00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3693
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3294
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.081 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.082 The recommended git tool is: git
00:00:00.082 using credential 00000000-0000-0000-0000-000000000002
00:00:00.084 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.105 Fetching changes from the remote Git repository
00:00:00.106 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.127 Using shallow fetch with depth 1
00:00:00.127 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.127 > git --version # timeout=10
00:00:00.150 > git --version # 'git version 2.39.2'
00:00:00.150 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.175 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.175 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.589 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.601 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.612 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:04.612 > git config core.sparsecheckout # timeout=10
00:00:04.621 > git read-tree -mu HEAD # timeout=10
00:00:04.637 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
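The checkout above is shallow and detached: a depth-1 fetch of refs/heads/master followed by a hard checkout of FETCH_HEAD, so only the branch tip is transferred. A minimal sketch of the same sequence against a throwaway local repository (paths, identities, and commit messages are made up for illustration):

```shell
# Reproduce the Jenkins checkout pattern: fetch only the branch tip
# (--depth=1), then check out the fetched commit detached (FETCH_HEAD).
set -e
repo=$(mktemp -d); work=$(mktemp -d)

git -C "$repo" init -q -b master
git -C "$repo" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m 'first commit'
git -C "$repo" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m 'second commit'

git -C "$work" init -q
git -C "$work" fetch -q --depth=1 "file://$repo" refs/heads/master
git -C "$work" checkout -qf FETCH_HEAD

# Shallow history: only the tip commit is present in $work.
git -C "$work" rev-list --count HEAD
```

The `file://` URL matters: it forces the regular transport, which honors `--depth`, instead of the local-path clone optimization.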
00:00:04.665 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:04.666 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:04.743 [Pipeline] Start of Pipeline
00:00:04.755 [Pipeline] library
00:00:04.756 Loading library shm_lib@master
00:00:04.756 Library shm_lib@master is cached. Copying from home.
00:00:04.775 [Pipeline] node
00:00:04.786 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.787 [Pipeline] {
00:00:04.797 [Pipeline] catchError
00:00:04.799 [Pipeline] {
00:00:04.810 [Pipeline] wrap
00:00:04.819 [Pipeline] {
00:00:04.828 [Pipeline] stage
00:00:04.829 [Pipeline] { (Prologue)
00:00:05.005 [Pipeline] sh
00:00:05.282 + logger -p user.info -t JENKINS-CI
00:00:05.299 [Pipeline] echo
00:00:05.301 Node: GP11
00:00:05.308 [Pipeline] sh
00:00:05.602 [Pipeline] setCustomBuildProperty
00:00:05.611 [Pipeline] echo
00:00:05.612 Cleanup processes
00:00:05.615 [Pipeline] sh
00:00:05.888 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.889 594104 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.899 [Pipeline] sh
00:00:06.177 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.177 ++ awk '{print $1}'
00:00:06.177 ++ grep -v 'sudo pgrep'
00:00:06.177 + sudo kill -9
00:00:06.177 + true
00:00:06.189 [Pipeline] cleanWs
00:00:06.196 [WS-CLEANUP] Deleting project workspace...
00:00:06.196 [WS-CLEANUP] Deferred wipeout is used...
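The "Cleanup processes" step above is a stale-process sweep: `pgrep -af` lists anything whose command line still references the workspace, the `pgrep` invocation itself is filtered out, and the survivors are force-killed; the trailing `+ true` keeps the step green when (as in this run) the list is empty and `kill` fails. A condensed sketch of the same pattern (`sudo` dropped for illustration; the path is just this job's workspace):

```shell
# Kill leftover processes from a previous run of this workspace.
# grep -v removes the pgrep pipeline itself from the candidate list;
# "|| true" absorbs the usage error kill raises when no PIDs remain.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
pids=$(pgrep -af "$WORKSPACE" | grep -v 'pgrep' | awk '{print $1}')
kill -9 $pids 2>/dev/null || true
```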
00:00:06.201 [WS-CLEANUP] done
00:00:06.204 [Pipeline] setCustomBuildProperty
00:00:06.217 [Pipeline] sh
00:00:06.496 + sudo git config --global --replace-all safe.directory '*'
00:00:06.556 [Pipeline] httpRequest
00:00:06.592 [Pipeline] echo
00:00:06.594 Sorcerer 10.211.164.101 is alive
00:00:06.600 [Pipeline] httpRequest
00:00:06.604 HttpMethod: GET
00:00:06.604 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:06.605 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:06.629 Response Code: HTTP/1.1 200 OK
00:00:06.629 Success: Status code 200 is in the accepted range: 200,404
00:00:06.629 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:31.069 [Pipeline] sh
00:00:31.351 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:31.366 [Pipeline] httpRequest
00:00:31.398 [Pipeline] echo
00:00:31.400 Sorcerer 10.211.164.101 is alive
00:00:31.408 [Pipeline] httpRequest
00:00:31.412 HttpMethod: GET
00:00:31.413 URL: http://10.211.164.101/packages/spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz
00:00:31.413 Sending request to url: http://10.211.164.101/packages/spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz
00:00:31.433 Response Code: HTTP/1.1 200 OK
00:00:31.434 Success: Status code 200 is in the accepted range: 200,404
00:00:31.435 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz
00:01:30.386 [Pipeline] sh
00:01:30.667 + tar --no-same-owner -xf spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz
00:01:33.958 [Pipeline] sh
00:01:34.240 + git -C spdk log --oneline -n5
00:01:34.240 d005e023b raid: fix empty slot not updated in sb after resize
00:01:34.240 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:01:34.240 8ee2672c4 test/bdev: Add test for resized RAID with superblock
00:01:34.240 19f5787c8 raid: skip configured base bdevs in sb examine
00:01:34.240 3b9baa5f8 bdev/raid1: Support resize when increasing the size of base bdevs
00:01:34.259 [Pipeline] withCredentials
00:01:34.269 > git --version # timeout=10
00:01:34.282 > git --version # 'git version 2.39.2'
00:01:34.299 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:34.302 [Pipeline] {
00:01:34.312 [Pipeline] retry
00:01:34.314 [Pipeline] {
00:01:34.332 [Pipeline] sh
00:01:34.614 + git ls-remote http://dpdk.org/git/dpdk main
00:01:34.885 [Pipeline] }
00:01:34.907 [Pipeline] // retry
00:01:34.912 [Pipeline] }
00:01:34.935 [Pipeline] // withCredentials
00:01:34.945 [Pipeline] httpRequest
00:01:34.967 [Pipeline] echo
00:01:34.969 Sorcerer 10.211.164.101 is alive
00:01:34.977 [Pipeline] httpRequest
00:01:34.982 HttpMethod: GET
00:01:34.983 URL: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz
00:01:34.984 Sending request to url: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz
00:01:34.985 Response Code: HTTP/1.1 200 OK
00:01:34.986 Success: Status code 200 is in the accepted range: 200,404
00:01:34.986 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz
00:01:39.161 [Pipeline] sh
00:01:39.439 + tar --no-same-owner -xf dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz
00:01:40.822 [Pipeline] sh
00:01:41.101 + git -C dpdk log --oneline -n5
00:01:41.101 82c47f005b version: 24.07-rc3
00:01:41.101 d9d1be537e doc: remove reference to mbuf pkt field
00:01:41.101 52c7393a03 doc: set required MinGW version in Windows guide
00:01:41.101 92439dc9ac dts: improve starting and stopping interactive shells
00:01:41.101 2b648cd4e4 dts: add context manager for interactive shells
00:01:41.111 [Pipeline] }
00:01:41.127 [Pipeline] // stage
00:01:41.137 [Pipeline] stage
00:01:41.139 [Pipeline] { (Prepare)
00:01:41.158 [Pipeline] writeFile
00:01:41.175 [Pipeline] sh
00:01:41.454 + logger -p user.info -t JENKINS-CI
00:01:41.466 [Pipeline] sh
00:01:41.747 + logger -p user.info -t JENKINS-CI
00:01:41.759 [Pipeline] sh
00:01:42.073 + cat autorun-spdk.conf
00:01:42.073 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:42.073 SPDK_TEST_NVMF=1
00:01:42.073 SPDK_TEST_NVME_CLI=1
00:01:42.073 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:42.073 SPDK_TEST_NVMF_NICS=e810
00:01:42.073 SPDK_TEST_VFIOUSER=1
00:01:42.073 SPDK_RUN_UBSAN=1
00:01:42.073 NET_TYPE=phy
00:01:42.073 SPDK_TEST_NATIVE_DPDK=main
00:01:42.073 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:42.079 RUN_NIGHTLY=1
00:01:42.086 [Pipeline] readFile
00:01:42.108 [Pipeline] withEnv
00:01:42.110 [Pipeline] {
00:01:42.123 [Pipeline] sh
00:01:42.404 + set -ex
00:01:42.404 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:42.404 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:42.404 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:42.404 ++ SPDK_TEST_NVMF=1
00:01:42.404 ++ SPDK_TEST_NVME_CLI=1
00:01:42.404 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:42.404 ++ SPDK_TEST_NVMF_NICS=e810
00:01:42.404 ++ SPDK_TEST_VFIOUSER=1
00:01:42.404 ++ SPDK_RUN_UBSAN=1
00:01:42.404 ++ NET_TYPE=phy
00:01:42.404 ++ SPDK_TEST_NATIVE_DPDK=main
00:01:42.404 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:42.404 ++ RUN_NIGHTLY=1
00:01:42.404 + case $SPDK_TEST_NVMF_NICS in
00:01:42.404 + DRIVERS=ice
00:01:42.404 + [[ tcp == \r\d\m\a ]]
00:01:42.404 + [[ -n ice ]]
00:01:42.404 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:42.404 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:42.404 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:42.404 rmmod: ERROR: Module irdma is not currently loaded
00:01:42.404 rmmod: ERROR: Module i40iw is not currently loaded
00:01:42.404 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:42.404 + true
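The prologue above sources autorun-spdk.conf and maps `SPDK_TEST_NVMF_NICS` to the kernel module the job must load: an E810 NIC needs `ice`, and since the transport is `tcp` (not `rdma`) the RDMA modules are only best-effort unloaded. A condensed sketch of that case logic; only the `e810` branch is taken from the log, the other branches are hypothetical examples:

```shell
# Choose the kernel module for the NIC under test, as the job
# prologue does for SPDK_TEST_NVMF_NICS=e810.
SPDK_TEST_NVMF_NICS=e810
SPDK_TEST_NVMF_TRANSPORT=tcp

case $SPDK_TEST_NVMF_NICS in
    e810) DRIVERS=ice ;;        # Intel E810 -> ice (this run)
    x722) DRIVERS=i40e ;;       # hypothetical sibling branch
    mlx5) DRIVERS=mlx5_core ;;  # hypothetical Mellanox branch
    *)    DRIVERS= ;;
esac

# An rdma run would also need the iWARP/RoCE modules; a tcp run
# (as here) only loads the NIC driver itself.
[[ $SPDK_TEST_NVMF_TRANSPORT == rdma ]] || echo "tcp run: modprobe $DRIVERS"
```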
00:01:42.404 + for D in $DRIVERS
00:01:42.404 + sudo modprobe ice
00:01:42.404 + exit 0
00:01:42.413 [Pipeline] }
00:01:42.430 [Pipeline] // withEnv
00:01:42.436 [Pipeline] }
00:01:42.452 [Pipeline] // stage
00:01:42.463 [Pipeline] catchError
00:01:42.465 [Pipeline] {
00:01:42.480 [Pipeline] timeout
00:01:42.481 Timeout set to expire in 50 min
00:01:42.483 [Pipeline] {
00:01:42.499 [Pipeline] stage
00:01:42.501 [Pipeline] { (Tests)
00:01:42.517 [Pipeline] sh
00:01:42.799 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:42.799 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:42.799 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:42.799 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:42.799 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:42.799 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:42.799 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:42.799 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:42.799 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:42.799 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:42.799 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:42.799 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:42.799 + source /etc/os-release
00:01:42.799 ++ NAME='Fedora Linux'
00:01:42.799 ++ VERSION='38 (Cloud Edition)'
00:01:42.799 ++ ID=fedora
00:01:42.799 ++ VERSION_ID=38
00:01:42.799 ++ VERSION_CODENAME=
00:01:42.799 ++ PLATFORM_ID=platform:f38
00:01:42.799 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:42.799 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:42.799 ++ LOGO=fedora-logo-icon
00:01:42.799 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:42.799 ++ HOME_URL=https://fedoraproject.org/
00:01:42.799 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:42.799 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:42.799 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:42.799 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:42.799 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:42.799 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:42.799 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:42.799 ++ SUPPORT_END=2024-05-14
00:01:42.799 ++ VARIANT='Cloud Edition'
00:01:42.799 ++ VARIANT_ID=cloud
00:01:42.799 + uname -a
00:01:42.799 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:42.799 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:43.733 Hugepages
00:01:43.733 node hugesize free / total
00:01:43.733 node0 1048576kB 0 / 0
00:01:43.733 node0 2048kB 0 / 0
00:01:43.733 node1 1048576kB 0 / 0
00:01:43.733 node1 2048kB 0 / 0
00:01:43.733
00:01:43.733 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:43.733 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:43.733 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:43.733 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:43.733 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:43.733 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:43.733 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:43.733 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:43.733 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:43.733 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:43.733 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:43.733 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:43.733 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:43.733 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:43.733 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:43.733 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:43.733 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:43.733 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:43.733 + rm -f /tmp/spdk-ld-path
00:01:43.733 + source autorun-spdk.conf
00:01:43.733 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.733 ++ SPDK_TEST_NVMF=1
00:01:43.733 ++ SPDK_TEST_NVME_CLI=1
00:01:43.733 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:43.733 ++ SPDK_TEST_NVMF_NICS=e810
00:01:43.733 ++ SPDK_TEST_VFIOUSER=1
00:01:43.733 ++ SPDK_RUN_UBSAN=1
00:01:43.733 ++ NET_TYPE=phy
00:01:43.733 ++ SPDK_TEST_NATIVE_DPDK=main
00:01:43.733 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:43.733 ++ RUN_NIGHTLY=1
00:01:43.733 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:43.733 + [[ -n '' ]]
00:01:43.733 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:43.733 + for M in /var/spdk/build-*-manifest.txt
00:01:43.733 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:43.733 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:43.733 + for M in /var/spdk/build-*-manifest.txt
00:01:43.733 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:43.733 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:43.733 ++ uname
00:01:43.733 + [[ Linux == \L\i\n\u\x ]]
00:01:43.733 + sudo dmesg -T
00:01:43.991 + sudo dmesg --clear
00:01:43.991 + dmesg_pid=594813
00:01:43.991 + [[ Fedora Linux == FreeBSD ]]
00:01:43.991 + sudo dmesg -Tw
00:01:43.991 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:43.991 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:43.991 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:43.991 + [[ -x /usr/src/fio-static/fio ]]
00:01:43.991 + export FIO_BIN=/usr/src/fio-static/fio
00:01:43.991 + FIO_BIN=/usr/src/fio-static/fio
00:01:43.991 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:43.991 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:43.991 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:43.991 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:43.991 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:43.991 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:43.991 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:43.991 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:43.991 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:43.991 Test configuration:
00:01:43.991 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.991 SPDK_TEST_NVMF=1
00:01:43.991 SPDK_TEST_NVME_CLI=1
00:01:43.991 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:43.991 SPDK_TEST_NVMF_NICS=e810
00:01:43.991 SPDK_TEST_VFIOUSER=1
00:01:43.991 SPDK_RUN_UBSAN=1
00:01:43.991 NET_TYPE=phy
00:01:43.991 SPDK_TEST_NATIVE_DPDK=main
00:01:43.991 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:43.991 RUN_NIGHTLY=1
00:01:43.991 03:44:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:43.991 03:44:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:43.991 03:44:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:43.991 03:44:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:43.991 03:44:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:43.991 03:44:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:43.991 03:44:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:43.991 03:44:59 -- paths/export.sh@5 -- $ export PATH
00:01:43.992 03:44:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:43.992 03:44:59 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:43.992 03:44:59 -- common/autobuild_common.sh@447 -- $ date +%s
00:01:43.992 03:44:59 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721871899.XXXXXX
00:01:43.992 03:44:59 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721871899.DuPEF1
00:01:43.992 03:44:59 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:43.992 03:44:59 -- common/autobuild_common.sh@453 -- $ '[' -n main ']'
00:01:43.992 03:44:59 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:43.992 03:44:59 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:01:43.992 03:44:59 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:43.992 03:44:59 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:43.992 03:44:59 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:43.992 03:44:59 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:43.992 03:44:59 -- common/autotest_common.sh@10 -- $ set +x
00:01:43.992 03:44:59 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:01:43.992 03:44:59 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:43.992 03:44:59 -- pm/common@17 -- $ local monitor
00:01:43.992 03:44:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:43.992 03:44:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:43.992 03:44:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:43.992 03:44:59 -- pm/common@21 -- $ date +%s
00:01:43.992 03:44:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:43.992 03:44:59 -- pm/common@21 -- $ date +%s
00:01:43.992 03:44:59 -- pm/common@25 -- $ sleep 1
00:01:43.992 03:44:59 -- pm/common@21 -- $ date +%s
00:01:43.992 03:44:59 -- pm/common@21 -- $ date +%s
00:01:43.992 03:44:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721871899
00:01:43.992 03:44:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721871899
00:01:43.992 03:44:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721871899
00:01:43.992 03:44:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721871899
00:01:43.992 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721871899_collect-vmstat.pm.log
00:01:43.992 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721871899_collect-cpu-load.pm.log
00:01:43.992 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721871899_collect-cpu-temp.pm.log
00:01:43.992 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721871899_collect-bmc-pm.bmc.pm.log
00:01:44.926 03:45:00 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:44.926 03:45:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:44.926 03:45:00 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:44.926 03:45:00 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:44.926 03:45:00 -- spdk/autobuild.sh@16 -- $ date -u
00:01:44.926 Thu Jul 25 01:45:00 AM UTC 2024
00:01:44.926 03:45:00 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:44.926 v24.09-pre-318-gd005e023b
00:01:44.926 03:45:00 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:44.926 03:45:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:44.926 03:45:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:44.926 03:45:00 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:44.926 03:45:00 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:44.926 03:45:00 -- common/autotest_common.sh@10 -- $ set +x
00:01:44.926 ************************************
00:01:44.926 START TEST ubsan
00:01:44.926 ************************************
00:01:44.926 03:45:00 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:44.926 using ubsan
00:01:44.926
00:01:44.926 real 0m0.000s
00:01:44.926 user 0m0.000s
00:01:44.926 sys 0m0.000s
00:01:44.926 03:45:00 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:44.926 03:45:00 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:44.926 ************************************
00:01:44.926 END TEST ubsan
00:01:44.926 ************************************
00:01:44.926 03:45:00 -- spdk/autobuild.sh@27 -- $ '[' -n main ']'
00:01:44.926 03:45:00 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:44.926 03:45:00 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:44.926 03:45:00 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:01:44.926 03:45:00 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:44.926 03:45:00 -- common/autotest_common.sh@10 -- $ set +x
00:01:44.926 ************************************
00:01:44.926 START TEST build_native_dpdk
00:01:44.926 ************************************
00:01:44.926 03:45:00 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:44.926 03:45:00 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:45.184 82c47f005b version: 24.07-rc3
00:01:45.184 d9d1be537e doc: remove reference to mbuf pkt field
00:01:45.184 52c7393a03 doc: set required MinGW version in Windows guide
00:01:45.184 92439dc9ac dts: improve starting and stopping interactive shells
00:01:45.184 2b648cd4e4 dts: add context manager for interactive shells
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc3
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc3 21.11.0
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 21.11.0
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
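The cmp_versions trace above (scripts/common.sh) splits both version strings on `.`, `-` and `:` via `IFS=.-:` plus `read -ra`, then compares the resulting fields numerically, which is how `lt 24.07.0-rc3 21.11.0` comes back false. A simplified standalone sketch of that comparison (not the original function; non-numeric fields such as `rc3` are treated as 0 here, unlike the real helper):

```shell
# Return 0 (true) when version $1 sorts strictly before $2.
# Fields are split on ".", "-" and ":" as in scripts/common.sh,
# then compared numerically; missing fields count as 0.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0   # assumption: "rc3" and the like count as 0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( 10#$a < 10#$b )) && return 0
        (( 10#$a > 10#$b )) && return 1
    done
    return 1
}
```

As in the trace, `version_lt 24.07.0-rc3 21.11.0` fails at the first field (24 > 21) and returns 1.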
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:45.185 patching file config/rte_config.h
00:01:45.185 Hunk #1 succeeded at 70 (offset 11 lines).
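The repeated `decimal 24` / `decimal 21` calls in the trace normalize each field before comparison; the important detail is forcing base 10, so a field like `07` (seen later in the log) becomes 7 instead of tripping bash's invalid-octal error. A minimal sketch of such a helper (the error handling is an assumption; the real scripts/common.sh version may differ):

```shell
# Normalize one version field to a plain base-10 integer.
# $((10#...)) forces base-10 interpretation, so "07" -> 7 rather than
# being rejected as a malformed octal literal.
decimal() {
    local d=$1
    if [[ $d =~ ^[0-9]+$ ]]; then
        echo $((10#$d))
    else
        echo "invalid number: $d" >&2
        return 1
    fi
}
```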
00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.07.0-rc3 24.07.0
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 24.07.0
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] ))
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ ))
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 07
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=7
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 07
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=7
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] ))
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ ))
00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 0 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 0 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@362 -- $ decimal rc3 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d=rc3 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ rc3 =~ ^[0-9]+$ ]] 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^0x ]] 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^[a-f0-9]+$ ]] 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@363 -- $ decimal '' 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d= 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ '' =~ ^[0-9]+$ ]] 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^0x ]] 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^[a-f0-9]+$ ]] 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@367 -- $ [[ 24 7 0 0 == \2\4\ \7\ \0\ \0 ]] 00:01:45.185 03:45:00 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:45.185 03:45:00 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:49.369 The Meson build system 00:01:49.369 Version: 1.3.1 00:01:49.369 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:49.369 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:49.369 Build type: native build 00:01:49.369 Program cat found: YES (/usr/bin/cat) 00:01:49.369 Project name: DPDK 00:01:49.369 Project version: 24.07.0-rc3 00:01:49.369 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:49.369 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:49.369 Host machine cpu family: x86_64 00:01:49.369 Host machine cpu: x86_64 00:01:49.369 Message: ## Building in Developer Mode ## 00:01:49.369 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:49.369 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:49.369 Program options-ibverbs-static.sh found: YES 
(/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:49.369 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:49.369 Program cat found: YES (/usr/bin/cat) 00:01:49.369 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:49.369 Compiler for C supports arguments -march=native: YES 00:01:49.369 Checking for size of "void *" : 8 00:01:49.369 Checking for size of "void *" : 8 (cached) 00:01:49.369 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:49.369 Library m found: YES 00:01:49.369 Library numa found: YES 00:01:49.369 Has header "numaif.h" : YES 00:01:49.369 Library fdt found: NO 00:01:49.369 Library execinfo found: NO 00:01:49.369 Has header "execinfo.h" : YES 00:01:49.369 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:49.369 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:49.369 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:49.369 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:49.369 Run-time dependency openssl found: YES 3.0.9 00:01:49.369 Run-time dependency libpcap found: YES 1.10.4 00:01:49.369 Has header "pcap.h" with dependency libpcap: YES 00:01:49.369 Compiler for C supports arguments -Wcast-qual: YES 00:01:49.369 Compiler for C supports arguments -Wdeprecated: YES 00:01:49.369 Compiler for C supports arguments -Wformat: YES 00:01:49.369 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:49.369 Compiler for C supports arguments -Wformat-security: NO 00:01:49.369 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:49.369 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:49.369 Compiler for C supports arguments -Wnested-externs: YES 00:01:49.369 Compiler for C supports arguments -Wold-style-definition: YES 00:01:49.369 Compiler for C supports arguments -Wpointer-arith: YES 00:01:49.369 Compiler for C 
supports arguments -Wsign-compare: YES 00:01:49.369 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:49.369 Compiler for C supports arguments -Wundef: YES 00:01:49.369 Compiler for C supports arguments -Wwrite-strings: YES 00:01:49.369 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:49.369 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:49.369 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:49.369 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:49.369 Program objdump found: YES (/usr/bin/objdump) 00:01:49.369 Compiler for C supports arguments -mavx512f: YES 00:01:49.369 Checking if "AVX512 checking" compiles: YES 00:01:49.369 Fetching value of define "__SSE4_2__" : 1 00:01:49.369 Fetching value of define "__AES__" : 1 00:01:49.369 Fetching value of define "__AVX__" : 1 00:01:49.369 Fetching value of define "__AVX2__" : (undefined) 00:01:49.369 Fetching value of define "__AVX512BW__" : (undefined) 00:01:49.369 Fetching value of define "__AVX512CD__" : (undefined) 00:01:49.369 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:49.369 Fetching value of define "__AVX512F__" : (undefined) 00:01:49.369 Fetching value of define "__AVX512VL__" : (undefined) 00:01:49.369 Fetching value of define "__PCLMUL__" : 1 00:01:49.369 Fetching value of define "__RDRND__" : 1 00:01:49.369 Fetching value of define "__RDSEED__" : (undefined) 00:01:49.369 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:49.369 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:49.369 Message: lib/log: Defining dependency "log" 00:01:49.369 Message: lib/kvargs: Defining dependency "kvargs" 00:01:49.369 Message: lib/argparse: Defining dependency "argparse" 00:01:49.369 Message: lib/telemetry: Defining dependency "telemetry" 00:01:49.369 Checking for function "getentropy" : NO 00:01:49.369 Message: lib/eal: Defining dependency "eal" 00:01:49.369 Message: 
lib/ptr_compress: Defining dependency "ptr_compress" 00:01:49.369 Message: lib/ring: Defining dependency "ring" 00:01:49.369 Message: lib/rcu: Defining dependency "rcu" 00:01:49.369 Message: lib/mempool: Defining dependency "mempool" 00:01:49.369 Message: lib/mbuf: Defining dependency "mbuf" 00:01:49.369 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:49.369 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.369 Compiler for C supports arguments -mpclmul: YES 00:01:49.369 Compiler for C supports arguments -maes: YES 00:01:49.369 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:49.369 Compiler for C supports arguments -mavx512bw: YES 00:01:49.369 Compiler for C supports arguments -mavx512dq: YES 00:01:49.369 Compiler for C supports arguments -mavx512vl: YES 00:01:49.369 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:49.369 Compiler for C supports arguments -mavx2: YES 00:01:49.369 Compiler for C supports arguments -mavx: YES 00:01:49.369 Message: lib/net: Defining dependency "net" 00:01:49.369 Message: lib/meter: Defining dependency "meter" 00:01:49.369 Message: lib/ethdev: Defining dependency "ethdev" 00:01:49.369 Message: lib/pci: Defining dependency "pci" 00:01:49.369 Message: lib/cmdline: Defining dependency "cmdline" 00:01:49.369 Message: lib/metrics: Defining dependency "metrics" 00:01:49.369 Message: lib/hash: Defining dependency "hash" 00:01:49.369 Message: lib/timer: Defining dependency "timer" 00:01:49.369 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.369 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:49.369 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:49.369 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:49.369 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:49.369 Message: lib/acl: Defining dependency "acl" 00:01:49.369 Message: lib/bbdev: Defining dependency "bbdev" 
00:01:49.369 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:49.369 Run-time dependency libelf found: YES 0.190 00:01:49.369 Message: lib/bpf: Defining dependency "bpf" 00:01:49.369 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:49.369 Message: lib/compressdev: Defining dependency "compressdev" 00:01:49.369 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:49.369 Message: lib/distributor: Defining dependency "distributor" 00:01:49.369 Message: lib/dmadev: Defining dependency "dmadev" 00:01:49.369 Message: lib/efd: Defining dependency "efd" 00:01:49.369 Message: lib/eventdev: Defining dependency "eventdev" 00:01:49.369 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:49.369 Message: lib/gpudev: Defining dependency "gpudev" 00:01:49.369 Message: lib/gro: Defining dependency "gro" 00:01:49.369 Message: lib/gso: Defining dependency "gso" 00:01:49.369 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:49.369 Message: lib/jobstats: Defining dependency "jobstats" 00:01:49.369 Message: lib/latencystats: Defining dependency "latencystats" 00:01:49.369 Message: lib/lpm: Defining dependency "lpm" 00:01:49.369 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.369 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:49.369 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:49.369 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:49.369 Message: lib/member: Defining dependency "member" 00:01:49.369 Message: lib/pcapng: Defining dependency "pcapng" 00:01:49.369 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:49.369 Message: lib/power: Defining dependency "power" 00:01:49.369 Message: lib/rawdev: Defining dependency "rawdev" 00:01:49.369 Message: lib/regexdev: Defining dependency "regexdev" 00:01:49.369 Message: lib/mldev: Defining dependency "mldev" 00:01:49.369 Message: lib/rib: Defining dependency "rib" 00:01:49.369 Message: 
lib/reorder: Defining dependency "reorder" 00:01:49.369 Message: lib/sched: Defining dependency "sched" 00:01:49.369 Message: lib/security: Defining dependency "security" 00:01:49.369 Message: lib/stack: Defining dependency "stack" 00:01:49.369 Has header "linux/userfaultfd.h" : YES 00:01:49.369 Has header "linux/vduse.h" : YES 00:01:49.369 Message: lib/vhost: Defining dependency "vhost" 00:01:49.369 Message: lib/ipsec: Defining dependency "ipsec" 00:01:49.369 Message: lib/pdcp: Defining dependency "pdcp" 00:01:49.369 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.369 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:49.369 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:49.369 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:49.369 Message: lib/fib: Defining dependency "fib" 00:01:49.369 Message: lib/port: Defining dependency "port" 00:01:49.369 Message: lib/pdump: Defining dependency "pdump" 00:01:49.369 Message: lib/table: Defining dependency "table" 00:01:49.369 Message: lib/pipeline: Defining dependency "pipeline" 00:01:49.369 Message: lib/graph: Defining dependency "graph" 00:01:49.369 Message: lib/node: Defining dependency "node" 00:01:50.747 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:50.747 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:50.747 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:50.747 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.747 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:50.747 Compiler for C supports arguments -Wno-unused-value: YES 00:01:50.747 Compiler for C supports arguments -Wno-format: YES 00:01:50.747 Compiler for C supports arguments -Wno-format-security: YES 00:01:50.747 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:50.747 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:50.747 Compiler for C supports 
arguments -Wno-unused-but-set-variable: YES 00:01:50.747 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:50.747 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:50.747 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:50.747 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:50.747 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:50.747 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:50.747 Has header "sys/epoll.h" : YES 00:01:50.747 Program doxygen found: YES (/usr/bin/doxygen) 00:01:50.747 Configuring doxy-api-html.conf using configuration 00:01:50.747 Configuring doxy-api-man.conf using configuration 00:01:50.747 Program mandb found: YES (/usr/bin/mandb) 00:01:50.747 Program sphinx-build found: NO 00:01:50.747 Configuring rte_build_config.h using configuration 00:01:50.747 Message: 00:01:50.747 ================= 00:01:50.747 Applications Enabled 00:01:50.747 ================= 00:01:50.747 00:01:50.747 apps: 00:01:50.747 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:50.747 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:50.747 test-pmd, test-regex, test-sad, test-security-perf, 00:01:50.747 00:01:50.747 Message: 00:01:50.747 ================= 00:01:50.747 Libraries Enabled 00:01:50.747 ================= 00:01:50.747 00:01:50.747 libs: 00:01:50.747 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:01:50.747 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:01:50.747 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:01:50.747 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:01:50.747 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:01:50.747 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:01:50.747 vhost, ipsec, pdcp, fib, port, pdump, 
table, pipeline, 00:01:50.747 graph, node, 00:01:50.747 00:01:50.747 Message: 00:01:50.747 =============== 00:01:50.747 Drivers Enabled 00:01:50.747 =============== 00:01:50.747 00:01:50.747 common: 00:01:50.747 00:01:50.747 bus: 00:01:50.747 pci, vdev, 00:01:50.747 mempool: 00:01:50.747 ring, 00:01:50.747 dma: 00:01:50.747 00:01:50.747 net: 00:01:50.747 i40e, 00:01:50.747 raw: 00:01:50.747 00:01:50.747 crypto: 00:01:50.747 00:01:50.747 compress: 00:01:50.747 00:01:50.747 regex: 00:01:50.747 00:01:50.747 ml: 00:01:50.747 00:01:50.747 vdpa: 00:01:50.747 00:01:50.747 event: 00:01:50.747 00:01:50.747 baseband: 00:01:50.747 00:01:50.747 gpu: 00:01:50.747 00:01:50.747 00:01:50.747 Message: 00:01:50.747 ================= 00:01:50.747 Content Skipped 00:01:50.747 ================= 00:01:50.747 00:01:50.747 apps: 00:01:50.747 00:01:50.747 libs: 00:01:50.747 00:01:50.747 drivers: 00:01:50.747 common/cpt: not in enabled drivers build config 00:01:50.747 common/dpaax: not in enabled drivers build config 00:01:50.747 common/iavf: not in enabled drivers build config 00:01:50.747 common/idpf: not in enabled drivers build config 00:01:50.747 common/ionic: not in enabled drivers build config 00:01:50.747 common/mvep: not in enabled drivers build config 00:01:50.747 common/octeontx: not in enabled drivers build config 00:01:50.747 bus/auxiliary: not in enabled drivers build config 00:01:50.747 bus/cdx: not in enabled drivers build config 00:01:50.747 bus/dpaa: not in enabled drivers build config 00:01:50.747 bus/fslmc: not in enabled drivers build config 00:01:50.747 bus/ifpga: not in enabled drivers build config 00:01:50.747 bus/platform: not in enabled drivers build config 00:01:50.747 bus/uacce: not in enabled drivers build config 00:01:50.747 bus/vmbus: not in enabled drivers build config 00:01:50.747 common/cnxk: not in enabled drivers build config 00:01:50.747 common/mlx5: not in enabled drivers build config 00:01:50.747 common/nfp: not in enabled drivers build config 
00:01:50.747 common/nitrox: not in enabled drivers build config 00:01:50.747 common/qat: not in enabled drivers build config 00:01:50.747 common/sfc_efx: not in enabled drivers build config 00:01:50.747 mempool/bucket: not in enabled drivers build config 00:01:50.747 mempool/cnxk: not in enabled drivers build config 00:01:50.747 mempool/dpaa: not in enabled drivers build config 00:01:50.747 mempool/dpaa2: not in enabled drivers build config 00:01:50.747 mempool/octeontx: not in enabled drivers build config 00:01:50.747 mempool/stack: not in enabled drivers build config 00:01:50.747 dma/cnxk: not in enabled drivers build config 00:01:50.747 dma/dpaa: not in enabled drivers build config 00:01:50.747 dma/dpaa2: not in enabled drivers build config 00:01:50.747 dma/hisilicon: not in enabled drivers build config 00:01:50.747 dma/idxd: not in enabled drivers build config 00:01:50.747 dma/ioat: not in enabled drivers build config 00:01:50.747 dma/odm: not in enabled drivers build config 00:01:50.747 dma/skeleton: not in enabled drivers build config 00:01:50.747 net/af_packet: not in enabled drivers build config 00:01:50.747 net/af_xdp: not in enabled drivers build config 00:01:50.747 net/ark: not in enabled drivers build config 00:01:50.747 net/atlantic: not in enabled drivers build config 00:01:50.747 net/avp: not in enabled drivers build config 00:01:50.747 net/axgbe: not in enabled drivers build config 00:01:50.747 net/bnx2x: not in enabled drivers build config 00:01:50.747 net/bnxt: not in enabled drivers build config 00:01:50.747 net/bonding: not in enabled drivers build config 00:01:50.747 net/cnxk: not in enabled drivers build config 00:01:50.747 net/cpfl: not in enabled drivers build config 00:01:50.747 net/cxgbe: not in enabled drivers build config 00:01:50.747 net/dpaa: not in enabled drivers build config 00:01:50.748 net/dpaa2: not in enabled drivers build config 00:01:50.748 net/e1000: not in enabled drivers build config 00:01:50.748 net/ena: not in enabled 
drivers build config 00:01:50.748 net/enetc: not in enabled drivers build config 00:01:50.748 net/enetfec: not in enabled drivers build config 00:01:50.748 net/enic: not in enabled drivers build config 00:01:50.748 net/failsafe: not in enabled drivers build config 00:01:50.748 net/fm10k: not in enabled drivers build config 00:01:50.748 net/gve: not in enabled drivers build config 00:01:50.748 net/hinic: not in enabled drivers build config 00:01:50.748 net/hns3: not in enabled drivers build config 00:01:50.748 net/iavf: not in enabled drivers build config 00:01:50.748 net/ice: not in enabled drivers build config 00:01:50.748 net/idpf: not in enabled drivers build config 00:01:50.748 net/igc: not in enabled drivers build config 00:01:50.748 net/ionic: not in enabled drivers build config 00:01:50.748 net/ipn3ke: not in enabled drivers build config 00:01:50.748 net/ixgbe: not in enabled drivers build config 00:01:50.748 net/mana: not in enabled drivers build config 00:01:50.748 net/memif: not in enabled drivers build config 00:01:50.748 net/mlx4: not in enabled drivers build config 00:01:50.748 net/mlx5: not in enabled drivers build config 00:01:50.748 net/mvneta: not in enabled drivers build config 00:01:50.748 net/mvpp2: not in enabled drivers build config 00:01:50.748 net/netvsc: not in enabled drivers build config 00:01:50.748 net/nfb: not in enabled drivers build config 00:01:50.748 net/nfp: not in enabled drivers build config 00:01:50.748 net/ngbe: not in enabled drivers build config 00:01:50.748 net/ntnic: not in enabled drivers build config 00:01:50.748 net/null: not in enabled drivers build config 00:01:50.748 net/octeontx: not in enabled drivers build config 00:01:50.748 net/octeon_ep: not in enabled drivers build config 00:01:50.748 net/pcap: not in enabled drivers build config 00:01:50.748 net/pfe: not in enabled drivers build config 00:01:50.748 net/qede: not in enabled drivers build config 00:01:50.748 net/ring: not in enabled drivers build config 
00:01:50.748 net/sfc: not in enabled drivers build config 00:01:50.748 net/softnic: not in enabled drivers build config 00:01:50.748 net/tap: not in enabled drivers build config 00:01:50.748 net/thunderx: not in enabled drivers build config 00:01:50.748 net/txgbe: not in enabled drivers build config 00:01:50.748 net/vdev_netvsc: not in enabled drivers build config 00:01:50.748 net/vhost: not in enabled drivers build config 00:01:50.748 net/virtio: not in enabled drivers build config 00:01:50.748 net/vmxnet3: not in enabled drivers build config 00:01:50.748 raw/cnxk_bphy: not in enabled drivers build config 00:01:50.748 raw/cnxk_gpio: not in enabled drivers build config 00:01:50.748 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:50.748 raw/ifpga: not in enabled drivers build config 00:01:50.748 raw/ntb: not in enabled drivers build config 00:01:50.748 raw/skeleton: not in enabled drivers build config 00:01:50.748 crypto/armv8: not in enabled drivers build config 00:01:50.748 crypto/bcmfs: not in enabled drivers build config 00:01:50.748 crypto/caam_jr: not in enabled drivers build config 00:01:50.748 crypto/ccp: not in enabled drivers build config 00:01:50.748 crypto/cnxk: not in enabled drivers build config 00:01:50.748 crypto/dpaa_sec: not in enabled drivers build config 00:01:50.748 crypto/dpaa2_sec: not in enabled drivers build config 00:01:50.748 crypto/ionic: not in enabled drivers build config 00:01:50.748 crypto/ipsec_mb: not in enabled drivers build config 00:01:50.748 crypto/mlx5: not in enabled drivers build config 00:01:50.748 crypto/mvsam: not in enabled drivers build config 00:01:50.748 crypto/nitrox: not in enabled drivers build config 00:01:50.748 crypto/null: not in enabled drivers build config 00:01:50.748 crypto/octeontx: not in enabled drivers build config 00:01:50.748 crypto/openssl: not in enabled drivers build config 00:01:50.748 crypto/scheduler: not in enabled drivers build config 00:01:50.748 crypto/uadk: not in enabled drivers 
build config 00:01:50.748 crypto/virtio: not in enabled drivers build config 00:01:50.748 compress/isal: not in enabled drivers build config 00:01:50.748 compress/mlx5: not in enabled drivers build config 00:01:50.748 compress/nitrox: not in enabled drivers build config 00:01:50.748 compress/octeontx: not in enabled drivers build config 00:01:50.748 compress/uadk: not in enabled drivers build config 00:01:50.748 compress/zlib: not in enabled drivers build config 00:01:50.748 regex/mlx5: not in enabled drivers build config 00:01:50.748 regex/cn9k: not in enabled drivers build config 00:01:50.748 ml/cnxk: not in enabled drivers build config 00:01:50.748 vdpa/ifc: not in enabled drivers build config 00:01:50.748 vdpa/mlx5: not in enabled drivers build config 00:01:50.748 vdpa/nfp: not in enabled drivers build config 00:01:50.748 vdpa/sfc: not in enabled drivers build config 00:01:50.748 event/cnxk: not in enabled drivers build config 00:01:50.748 event/dlb2: not in enabled drivers build config 00:01:50.748 event/dpaa: not in enabled drivers build config 00:01:50.748 event/dpaa2: not in enabled drivers build config 00:01:50.748 event/dsw: not in enabled drivers build config 00:01:50.748 event/opdl: not in enabled drivers build config 00:01:50.748 event/skeleton: not in enabled drivers build config 00:01:50.748 event/sw: not in enabled drivers build config 00:01:50.748 event/octeontx: not in enabled drivers build config 00:01:50.748 baseband/acc: not in enabled drivers build config 00:01:50.748 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:50.748 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:50.748 baseband/la12xx: not in enabled drivers build config 00:01:50.748 baseband/null: not in enabled drivers build config 00:01:50.748 baseband/turbo_sw: not in enabled drivers build config 00:01:50.748 gpu/cuda: not in enabled drivers build config 00:01:50.748 00:01:50.748 00:01:50.748 Build targets in project: 224 00:01:50.748 00:01:50.748 
DPDK 24.07.0-rc3 00:01:50.748 00:01:50.748 User defined options 00:01:50.748 libdir : lib 00:01:50.748 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:50.748 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:50.748 c_link_args : 00:01:50.748 enable_docs : false 00:01:50.748 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:50.748 enable_kmods : false 00:01:50.748 machine : native 00:01:50.748 tests : false 00:01:50.748 00:01:50.748 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.748 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:50.748 03:45:05 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:50.748 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:50.748 [1/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:50.748 [2/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:50.748 [3/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:50.748 [4/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:51.012 [5/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:51.012 [6/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:51.012 [7/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:51.012 [8/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:51.012 [9/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:51.012 [10/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:51.012 [11/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:51.012 [12/723] Linking static target 
lib/librte_kvargs.a 00:01:51.012 [13/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:51.012 [14/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:51.275 [15/723] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:51.275 [16/723] Linking static target lib/librte_log.a 00:01:51.275 [17/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:01:51.275 [18/723] Linking static target lib/librte_argparse.a 00:01:51.275 [19/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.844 [20/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.844 [21/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.844 [22/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.844 [23/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:51.844 [24/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.844 [25/723] Linking target lib/librte_log.so.24.2 00:01:51.844 [26/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:51.844 [27/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:51.845 [28/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:52.109 [29/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:52.109 [30/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:52.109 [31/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:52.109 [32/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:52.109 [33/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:52.109 [34/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:52.109 [35/723] Compiling C 
object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:52.109 [36/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:52.109 [37/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:52.109 [38/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:52.109 [39/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:52.109 [40/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:52.109 [41/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.109 [42/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:52.109 [43/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:52.109 [44/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:52.109 [45/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:52.109 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:52.109 [47/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:01:52.109 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:52.109 [49/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:52.109 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.109 [51/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:52.109 [52/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:52.109 [53/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:52.109 [54/723] Linking target lib/librte_kvargs.so.24.2 00:01:52.109 [55/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:52.109 [56/723] Linking target lib/librte_argparse.so.24.2 00:01:52.109 [57/723] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:52.110 [58/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.110 [59/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:52.110 [60/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:52.371 [61/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.371 [62/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:52.371 [63/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.372 [64/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:01:52.638 [65/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.638 [66/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.638 [67/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:52.638 [68/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:52.638 [69/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.638 [70/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.638 [71/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.897 [72/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.897 [73/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:52.897 [74/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:52.897 [75/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:52.897 [76/723] Linking static target lib/librte_pci.a 00:01:53.159 [77/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:53.159 [78/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:53.159 [79/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:53.159 [80/723] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:53.159 [81/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:53.159 [82/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:53.159 [83/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:53.159 [84/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:53.159 [85/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:53.159 [86/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:53.418 [87/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:53.418 [88/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:53.418 [89/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:53.418 [90/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:53.418 [91/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:53.418 [92/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.418 [93/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:53.418 [94/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:53.418 [95/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:53.418 [96/723] Linking static target lib/librte_ring.a 00:01:53.418 [97/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:53.418 [98/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:53.418 [99/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:53.418 [100/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:53.418 [101/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:53.418 [102/723] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:53.418 [103/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:53.418 [104/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:53.418 [105/723] Linking static target lib/librte_meter.a 00:01:53.418 [106/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:53.418 [107/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:53.418 [108/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:53.418 [109/723] Linking static target lib/librte_telemetry.a 00:01:53.418 [110/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:53.682 [111/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:53.682 [112/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:53.682 [113/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:53.682 [114/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:53.682 [115/723] Linking static target lib/librte_net.a 00:01:53.682 [116/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:53.682 [117/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:53.939 [118/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.939 [119/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:53.939 [120/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.939 [121/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:53.939 [122/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:53.939 [123/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:53.939 [124/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:53.939 [125/723] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:53.939 [126/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.201 [127/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:54.201 [128/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:54.201 [129/723] Linking static target lib/librte_mempool.a 00:01:54.201 [130/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.201 [131/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:54.201 [132/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:54.202 [133/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:54.202 [134/723] Linking static target lib/librte_eal.a 00:01:54.202 [135/723] Linking target lib/librte_telemetry.so.24.2 00:01:54.202 [136/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:54.461 [137/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:54.461 [138/723] Linking static target lib/librte_cmdline.a 00:01:54.461 [139/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:54.461 [140/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:54.461 [141/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:54.461 [142/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:54.461 [143/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:54.461 [144/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:54.461 [145/723] Linking static target lib/librte_cfgfile.a 00:01:54.461 [146/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:54.461 [147/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:54.461 [148/723] Compiling C 
object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:54.461 [149/723] Linking static target lib/librte_metrics.a 00:01:54.461 [150/723] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:54.722 [151/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:54.722 [152/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:54.722 [153/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:54.722 [154/723] Linking static target lib/librte_rcu.a 00:01:54.722 [155/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:54.722 [156/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:54.987 [157/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:54.987 [158/723] Linking static target lib/librte_bitratestats.a 00:01:54.987 [159/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:54.987 [160/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:54.987 [161/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:54.987 [162/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.252 [163/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:55.252 [164/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:55.252 [165/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.252 [166/723] Linking static target lib/librte_mbuf.a 00:01:55.252 [167/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.252 [168/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:55.252 [169/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.252 [170/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:55.252 [171/723] Linking static target lib/librte_timer.a 00:01:55.252 [172/723] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:55.252 [173/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.252 [174/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.512 [175/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:55.512 [176/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:55.512 [177/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:55.512 [178/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:55.512 [179/723] Linking static target lib/librte_bbdev.a 00:01:55.512 [180/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:55.512 [181/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:55.777 [182/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:55.777 [183/723] Linking static target lib/librte_compressdev.a 00:01:55.777 [184/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:55.777 [185/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.777 [186/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.777 [187/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:55.777 [188/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:55.777 [189/723] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.777 [190/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:55.777 [191/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:56.039 [192/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:56.039 [193/723] Generating lib/mbuf.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:56.301 [194/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:56.301 [195/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:56.568 [196/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:56.568 [197/723] Linking static target lib/librte_distributor.a 00:01:56.568 [198/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.568 [199/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:56.568 [200/723] Linking static target lib/librte_dmadev.a 00:01:56.568 [201/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:56.568 [202/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.568 [203/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:56.568 [204/723] Linking static target lib/librte_bpf.a 00:01:56.829 [205/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:56.829 [206/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:56.829 [207/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:56.829 [208/723] Linking static target lib/librte_dispatcher.a 00:01:56.829 [209/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:56.829 [210/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:56.829 [211/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:56.829 [212/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:56.829 [213/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:56.829 [214/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:56.829 [215/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.829 [216/723] Compiling C object 
lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:56.829 [217/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:56.829 [218/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:56.829 [219/723] Linking static target lib/librte_gpudev.a 00:01:57.092 [220/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:57.092 [221/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:57.092 [222/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:57.092 [223/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:57.092 [224/723] Linking static target lib/librte_gro.a 00:01:57.092 [225/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:57.092 [226/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:57.092 [227/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:57.092 [228/723] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:57.092 [229/723] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.092 [230/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:57.092 [231/723] Linking static target lib/librte_jobstats.a 00:01:57.092 [232/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:57.092 [233/723] Linking static target lib/librte_gso.a 00:01:57.092 [234/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:57.357 [235/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.357 [236/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:57.357 [237/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:57.357 [238/723] Linking static target lib/librte_latencystats.a 00:01:57.357 [239/723] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:57.357 [240/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.357 [241/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:57.619 [242/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.619 [243/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.619 [244/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:57.619 [245/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.619 [246/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:57.619 [247/723] Linking static target lib/librte_ip_frag.a 00:01:57.619 [248/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:57.619 [249/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:57.619 [250/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:57.883 [251/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:57.883 [252/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:57.883 [253/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:57.883 [254/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.883 [255/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:57.883 [256/723] Linking static target lib/librte_efd.a 00:01:57.883 [257/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:57.883 [258/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:58.146 [259/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.146 [260/723] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.146 [261/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.146 [262/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:58.146 [263/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:58.485 [264/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:58.485 [265/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.485 [266/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:58.485 [267/723] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:58.485 [268/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.485 [269/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:58.485 [270/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.485 [271/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:58.752 [272/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:58.752 [273/723] Linking static target lib/librte_regexdev.a 00:01:58.752 [274/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:58.752 [275/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.752 [276/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:58.752 [277/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:58.752 [278/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:58.752 [279/723] Linking static target lib/librte_rawdev.a 00:01:58.752 [280/723] Linking static target lib/librte_pcapng.a 00:01:58.752 [281/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:58.752 [282/723] Compiling C object 
lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:58.752 [283/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:58.752 [284/723] Linking static target lib/librte_power.a 00:01:58.752 [285/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:58.752 [286/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:58.752 [287/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:58.752 [288/723] Linking static target lib/librte_stack.a 00:01:58.752 [289/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:58.752 [290/723] Linking static target lib/librte_mldev.a 00:01:58.752 [291/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:58.752 [292/723] Linking static target lib/librte_lpm.a 00:01:58.752 [293/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:59.017 [294/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:59.017 [295/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.017 [296/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:59.017 [297/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:59.017 [298/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:59.017 [299/723] Linking static target lib/acl/libavx2_tmp.a 00:01:59.277 [300/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.277 [301/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.277 [302/723] Linking static target lib/librte_cryptodev.a 00:01:59.277 [303/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:59.277 [304/723] Linking static target lib/librte_reorder.a 00:01:59.277 [305/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:59.277 [306/723] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:01:59.277 [307/723] Linking static target lib/librte_security.a 00:01:59.277 [308/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:59.277 [309/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.277 [310/723] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.540 [311/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:59.540 [312/723] Linking static target lib/librte_hash.a 00:01:59.540 [313/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:59.540 [314/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:59.540 [315/723] Linking static target lib/acl/libavx512_tmp.a 00:01:59.540 [316/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.540 [317/723] Linking static target lib/librte_acl.a 00:01:59.803 [318/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:59.803 [319/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.803 [320/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:59.803 [321/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.803 [322/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:59.803 [323/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.803 [324/723] Linking static target lib/librte_rib.a 00:01:59.803 [325/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.803 [326/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:59.803 [327/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:59.803 [328/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:59.803 [329/723] Compiling C object 
lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:00.064 [330/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:00.064 [331/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:00.064 [332/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.064 [333/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:00.064 [334/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:00.064 [335/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:00.064 [336/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:00.064 [337/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:00.064 [338/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:00.064 [339/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.324 [340/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:00.324 [341/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.324 [342/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.593 [343/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:00.593 [344/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:00.851 [345/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:00.851 [346/723] Linking static target lib/librte_eventdev.a 00:02:00.851 [347/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:00.851 [348/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:01.110 [349/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:01.110 [350/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:01.110 [351/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:01.110 [352/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:01.110 [353/723] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:01.110 [354/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:01.110 [355/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:01.110 [356/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:01.110 [357/723] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:01.110 [358/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.369 [359/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:01.369 [360/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:01.369 [361/723] Linking static target lib/librte_member.a 00:02:01.369 [362/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:01.369 [363/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:01.369 [364/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:01.369 [365/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:01.369 [366/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:01.369 [367/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:01.369 [368/723] Linking static target lib/librte_sched.a 00:02:01.369 [369/723] Linking static target lib/librte_fib.a 00:02:01.369 [370/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:01.369 [371/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:01.369 [372/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:01.369 [373/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:01.628 [374/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:01.628 [375/723] 
Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:01.628 [376/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:01.628 [377/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:01.628 [378/723] Linking static target lib/librte_ethdev.a 00:02:01.628 [379/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:01.628 [380/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:01.628 [381/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.890 [382/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:01.890 [383/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:01.890 [384/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:01.890 [385/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:01.890 [386/723] Linking static target lib/librte_ipsec.a 00:02:01.890 [387/723] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.890 [388/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.150 [389/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:02.150 [390/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:02.150 [391/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:02.412 [392/723] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:02.412 [393/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:02.412 [394/723] Linking static target lib/librte_pdump.a 00:02:02.412 [395/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:02.412 [396/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:02.412 [397/723] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:02.412 [398/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:02.412 [399/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:02.412 [400/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.412 [401/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:02.412 [402/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:02.671 [403/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:02.671 [404/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:02.671 [405/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:02.671 [406/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:02.671 [407/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:02.671 [408/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:02.671 [409/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.934 [410/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:02.934 [411/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:02.934 [412/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:02.934 [413/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:02.934 [414/723] Linking static target lib/librte_pdcp.a 00:02:02.934 [415/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:02.934 [416/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:02.934 [417/723] Linking static target lib/librte_table.a 00:02:02.934 [418/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:03.192 [419/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:03.192 [420/723] Compiling C object 
lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:03.192 [421/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:03.453 [422/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:03.453 [423/723] Linking static target lib/librte_graph.a 00:02:03.453 [424/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.453 [425/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:03.453 [426/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:03.453 [427/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:03.453 [428/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:03.453 [429/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:03.719 [430/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:03.719 [431/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:03.719 [432/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:03.719 [433/723] Linking static target lib/librte_port.a 00:02:03.719 [434/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:03.719 [435/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:02:03.719 [436/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:03.979 [437/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:03.979 [438/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:03.979 [439/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:03.979 [440/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:03.979 [441/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:04.238 [442/723] Generating lib/table.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:04.238 [443/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:04.238 [444/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:04.238 [445/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.238 [446/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:04.238 [447/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.238 [448/723] Linking static target drivers/librte_bus_vdev.a 00:02:04.238 [449/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:04.238 [450/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.238 [451/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.498 [452/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:04.498 [453/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:04.498 [454/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:04.498 [455/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.498 [456/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:04.498 [457/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:04.498 [458/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.498 [459/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:04.498 [460/723] Linking static target drivers/librte_bus_pci.a 00:02:04.498 [461/723] Linking static target lib/librte_node.a 00:02:04.498 [462/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:04.760 [463/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 
00:02:04.760 [464/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.760 [465/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.760 [466/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:04.760 [467/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:04.760 [468/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:04.761 [469/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:05.023 [470/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:05.023 [471/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:05.023 [472/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:05.023 [473/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:05.284 [474/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:05.284 [475/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:05.284 [476/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:05.284 [477/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.284 [478/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:05.284 [479/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:05.284 [480/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.284 [481/723] Linking static target drivers/librte_mempool_ring.a 00:02:05.543 [482/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:05.543 [483/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:05.543 [484/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.543 
[485/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.543 [486/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:05.543 [487/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.543 [488/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:05.543 [489/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:05.543 [490/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:05.543 [491/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:05.803 [492/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:05.803 [493/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:05.803 [494/723] Linking target lib/librte_eal.so.24.2 00:02:05.803 [495/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:05.803 [496/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:05.803 [497/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:06.067 [498/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:02:06.067 [499/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:06.067 [500/723] Linking target lib/librte_ring.so.24.2 00:02:06.067 [501/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:06.067 [502/723] Linking target lib/librte_meter.so.24.2 00:02:06.067 [503/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:06.067 [504/723] Linking target lib/librte_pci.so.24.2 00:02:06.067 [505/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:06.067 [506/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:06.067 [507/723] Linking target lib/librte_timer.so.24.2 00:02:06.328 [508/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:06.328 
[509/723] Linking target lib/librte_acl.so.24.2 00:02:06.328 [510/723] Linking target lib/librte_cfgfile.so.24.2 00:02:06.328 [511/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:06.328 [512/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:02:06.328 [513/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:06.328 [514/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:02:06.328 [515/723] Linking target lib/librte_dmadev.so.24.2 00:02:06.328 [516/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:06.328 [517/723] Linking target lib/librte_jobstats.so.24.2 00:02:06.328 [518/723] Linking target lib/librte_rawdev.so.24.2 00:02:06.328 [519/723] Linking target lib/librte_rcu.so.24.2 00:02:06.328 [520/723] Linking target lib/librte_stack.so.24.2 00:02:06.328 [521/723] Linking target lib/librte_mempool.so.24.2 00:02:06.328 [522/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:02:06.328 [523/723] Linking target drivers/librte_bus_vdev.so.24.2 00:02:06.328 [524/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:06.328 [525/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:06.328 [526/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:02:06.328 [527/723] Linking target drivers/librte_bus_pci.so.24.2 00:02:06.589 [528/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:02:06.589 [529/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:06.589 [530/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:06.589 [531/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:02:06.589 [532/723] Generating symbol file 
drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:02:06.589 [533/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:02:06.589 [534/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:02:06.589 [535/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:06.589 [536/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:02:06.589 [537/723] Linking target lib/librte_mbuf.so.24.2 00:02:06.852 [538/723] Linking target lib/librte_rib.so.24.2 00:02:06.852 [539/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:06.852 [540/723] Linking target drivers/librte_mempool_ring.so.24.2 00:02:06.852 [541/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:06.852 [542/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:06.852 [543/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:06.852 [544/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:02:06.852 [545/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:06.852 [546/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:02:06.852 [547/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:07.111 [548/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:07.111 [549/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:07.111 [550/723] Linking target lib/librte_bbdev.so.24.2 00:02:07.111 [551/723] Linking target lib/librte_net.so.24.2 00:02:07.111 [552/723] Linking target lib/librte_compressdev.so.24.2 00:02:07.111 [553/723] Linking target lib/librte_cryptodev.so.24.2 00:02:07.111 
[554/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:07.111 [555/723] Linking target lib/librte_distributor.so.24.2 00:02:07.111 [556/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:07.111 [557/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:07.111 [558/723] Linking target lib/librte_gpudev.so.24.2 00:02:07.111 [559/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:07.111 [560/723] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:07.111 [561/723] Linking target lib/librte_regexdev.so.24.2 00:02:07.111 [562/723] Linking target lib/librte_mldev.so.24.2 00:02:07.111 [563/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:07.111 [564/723] Linking target lib/librte_reorder.so.24.2 00:02:07.111 [565/723] Linking target lib/librte_sched.so.24.2 00:02:07.111 [566/723] Linking target lib/librte_fib.so.24.2 00:02:07.111 [567/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:07.111 [568/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:07.375 [569/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:07.375 [570/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:02:07.375 [571/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:07.375 [572/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:02:07.375 [573/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:07.375 [574/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:07.375 [575/723] Linking target lib/librte_cmdline.so.24.2 00:02:07.375 [576/723] Linking target lib/librte_security.so.24.2 00:02:07.375 [577/723] 
Linking target lib/librte_hash.so.24.2 00:02:07.375 [578/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:02:07.375 [579/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:07.375 [580/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:02:07.375 [581/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:07.375 [582/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:07.635 [583/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:07.635 [584/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:07.635 [585/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:02:07.635 [586/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:02:07.635 [587/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:07.635 [588/723] Linking target lib/librte_pdcp.so.24.2 00:02:07.635 [589/723] Linking target lib/librte_lpm.so.24.2 00:02:07.635 [590/723] Linking target lib/librte_efd.so.24.2 00:02:07.635 [591/723] Linking target lib/librte_member.so.24.2 00:02:07.895 [592/723] Linking target lib/librte_ipsec.so.24.2 00:02:07.895 [593/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:07.895 [594/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:07.896 [595/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:07.896 [596/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:02:07.896 [597/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:07.896 [598/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:07.896 [599/723] Generating 
symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:02:08.159 [600/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:08.159 [601/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:08.159 [602/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:08.159 [603/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:08.159 [604/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:08.420 [605/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:08.420 [606/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:08.420 [607/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:08.420 [608/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:08.420 [609/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:08.420 [610/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:08.678 [611/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:08.678 [612/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:08.678 [613/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:08.678 [614/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:08.678 [615/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:08.678 [616/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:08.940 [617/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:08.940 [618/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:08.940 [619/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:08.940 [620/723] 
Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:08.940 [621/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:08.940 [622/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:09.199 [623/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:09.199 [624/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:09.199 [625/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:09.199 [626/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:09.457 [627/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:09.457 [628/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:09.457 [629/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:09.457 [630/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:09.457 [631/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:09.457 [632/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:09.457 [633/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:09.457 [634/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.715 [635/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:09.715 [636/723] Linking target lib/librte_ethdev.so.24.2 00:02:09.715 [637/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:09.715 [638/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:09.715 [639/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:09.715 [640/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:09.715 [641/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:09.715 [642/723] Generating symbol file 
lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:09.715 [643/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:02:09.974 [644/723] Linking target lib/librte_eventdev.so.24.2 00:02:09.974 [645/723] Linking target lib/librte_pcapng.so.24.2 00:02:09.974 [646/723] Linking target lib/librte_metrics.so.24.2 00:02:09.974 [647/723] Linking target lib/librte_gso.so.24.2 00:02:09.974 [648/723] Linking target lib/librte_bpf.so.24.2 00:02:09.974 [649/723] Linking target lib/librte_gro.so.24.2 00:02:09.974 [650/723] Linking target lib/librte_ip_frag.so.24.2 00:02:09.974 [651/723] Linking target lib/librte_power.so.24.2 00:02:09.974 [652/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:09.974 [653/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:09.974 [654/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:09.974 [655/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:02:09.974 [656/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:02:09.974 [657/723] Linking target lib/librte_dispatcher.so.24.2 00:02:09.974 [658/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:02:09.974 [659/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:02:09.974 [660/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:02:09.974 [661/723] Linking target lib/librte_port.so.24.2 00:02:10.233 [662/723] Linking target lib/librte_pdump.so.24.2 00:02:10.233 [663/723] Linking target lib/librte_graph.so.24.2 00:02:10.233 [664/723] Linking target lib/librte_bitratestats.so.24.2 00:02:10.233 [665/723] Linking target lib/librte_latencystats.so.24.2 00:02:10.233 [666/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:10.233 [667/723] Generating symbol file 
lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:02:10.233 [668/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:02:10.233 [669/723] Linking target lib/librte_table.so.24.2 00:02:10.233 [670/723] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:10.233 [671/723] Linking target lib/librte_node.so.24.2 00:02:10.491 [672/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:02:10.491 [673/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:10.491 [674/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:10.491 [675/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:10.491 [676/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:10.491 [677/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:10.749 [678/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:11.007 [679/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:11.007 [680/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:11.265 [681/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:11.265 [682/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:11.556 [683/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:11.556 [684/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:11.556 [685/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:11.814 [686/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:11.814 [687/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:11.814 [688/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:11.814 [689/723] Linking static 
target drivers/librte_net_i40e.a 00:02:12.072 [690/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:12.331 [691/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.589 [692/723] Linking target drivers/librte_net_i40e.so.24.2 00:02:13.155 [693/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:13.413 [694/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:13.413 [695/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:21.520 [696/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:21.520 [697/723] Linking static target lib/librte_pipeline.a 00:02:21.778 [698/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:21.778 [699/723] Linking static target lib/librte_vhost.a 00:02:22.345 [700/723] Linking target app/dpdk-test-dma-perf 00:02:22.345 [701/723] Linking target app/dpdk-test-fib 00:02:22.345 [702/723] Linking target app/dpdk-test-sad 00:02:22.345 [703/723] Linking target app/dpdk-test-acl 00:02:22.345 [704/723] Linking target app/dpdk-test-security-perf 00:02:22.345 [705/723] Linking target app/dpdk-test-gpudev 00:02:22.345 [706/723] Linking target app/dpdk-pdump 00:02:22.345 [707/723] Linking target app/dpdk-test-flow-perf 00:02:22.345 [708/723] Linking target app/dpdk-test-pipeline 00:02:22.345 [709/723] Linking target app/dpdk-test-mldev 00:02:22.345 [710/723] Linking target app/dpdk-test-compress-perf 00:02:22.345 [711/723] Linking target app/dpdk-test-regex 00:02:22.345 [712/723] Linking target app/dpdk-test-eventdev 00:02:22.345 [713/723] Linking target app/dpdk-dumpcap 00:02:22.345 [714/723] Linking target app/dpdk-graph 00:02:22.345 [715/723] Linking target app/dpdk-test-cmdline 00:02:22.345 [716/723] Linking target app/dpdk-test-bbdev 00:02:22.345 [717/723] Linking target app/dpdk-test-crypto-perf 00:02:22.345 [718/723] Linking target 
app/dpdk-proc-info 00:02:22.603 [719/723] Linking target app/dpdk-testpmd 00:02:22.861 [720/723] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.861 [721/723] Linking target lib/librte_vhost.so.24.2 00:02:23.793 [722/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.793 [723/723] Linking target lib/librte_pipeline.so.24.2 00:02:23.793 03:45:39 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:23.793 03:45:39 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:23.793 03:45:39 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:24.051 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:24.052 [0/1] Installing files. 00:02:24.313 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:24.313 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:24.313 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.313 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:24.313 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:24.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:24.314 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 
00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.315 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.316 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:24.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:24.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.319 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_telemetry.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_pci.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_cfgfile.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing 
lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.319 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.320 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.320 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.320 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.320 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_rawdev.a 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_ipsec.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing drivers/librte_bus_pci.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:24.889 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:24.889 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:24.889 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.889 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:24.889 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.889 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.889 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.889 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.889 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.889 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.889 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.890 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.890 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.890 Installing app/dpdk-test-dma-perf to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.890 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.890 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.890 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.890 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.890 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.890 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.890 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.890 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.890 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.890 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:24.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.893 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:24.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:24.893 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:24.893 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:24.893 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:24.893 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:24.893 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:02:24.893 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:02:24.893 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:24.893 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:24.893 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:24.893 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:24.893 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:24.893 Installing symlink pointing to 
librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:24.893 Installing symlink pointing to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:24.893 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:24.893 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:24.893 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:24.893 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:24.893 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:24.893 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:24.893 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:24.893 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:24.893 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:24.893 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:24.893 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:24.893 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:24.893 Installing symlink 
pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:24.893 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:24.893 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:24.893 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:24.893 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:24.893 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:24.893 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:24.893 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:24.893 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:24.893 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:24.893 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:24.893 Installing symlink pointing to librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:24.893 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:24.893 Installing symlink pointing to librte_bitratestats.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:24.893 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:24.893 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:24.893 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:24.893 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:24.893 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:24.893 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:24.893 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:24.893 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:24.893 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:24.893 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:24.893 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:24.893 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:24.893 Installing symlink pointing to librte_dmadev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:24.893 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:24.893 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:24.893 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:24.893 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:24.893 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:24.893 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:24.893 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:24.893 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:24.893 Installing symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:24.893 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:24.893 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:24.893 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:24.893 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:24.893 Installing 
symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:24.893 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:24.893 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:24.893 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:24.893 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:24.893 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:24.893 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:24.893 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:24.893 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:24.893 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:24.893 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:24.893 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:24.893 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:24.893 Installing symlink pointing to librte_rawdev.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:24.893 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:24.893 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:24.893 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:24.893 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:24.893 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:24.893 Installing symlink pointing to librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:24.893 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:24.893 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:24.893 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:24.893 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:24.893 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:24.893 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:24.893 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:24.893 
Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:24.893 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:24.893 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:24.893 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:24.893 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:24.893 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:24.893 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:24.893 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:24.893 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:24.893 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:24.893 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:24.893 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:24.893 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:24.893 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 
00:02:24.893 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:24.894 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:24.894 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:24.894 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:24.894 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:24.894 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:24.894 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:24.894 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:24.894 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:02:24.894 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:02:24.894 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:02:24.894 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:02:24.894 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 
00:02:24.894 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:02:24.894 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:02:24.894 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:02:24.894 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:02:24.894 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:02:24.894 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:02:24.894 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:02:24.894 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:02:24.894 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:02:24.894 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:02:24.894 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:02:24.894 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:02:24.894 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:02:24.894 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:02:24.894 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:02:24.894 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:02:24.894 03:45:40 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:02:24.894 03:45:40 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:24.894 00:02:24.894 real 0m39.790s 00:02:24.894 user 13m55.922s 00:02:24.894 sys 2m0.682s 00:02:24.894 03:45:40 
build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:24.894 03:45:40 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:24.894 ************************************ 00:02:24.894 END TEST build_native_dpdk 00:02:24.894 ************************************ 00:02:24.894 03:45:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:24.894 03:45:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:24.894 03:45:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:24.894 03:45:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:24.894 03:45:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:24.894 03:45:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:24.894 03:45:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:24.894 03:45:40 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:24.894 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:25.152 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.152 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.152 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:25.409 Using 'verbs' RDMA provider 00:02:35.937 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:44.076 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:44.334 Creating mk/config.mk...done. 00:02:44.334 Creating mk/cc.flags.mk...done. 00:02:44.334 Type 'make' to build. 
00:02:44.334 03:45:59 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:44.334 03:45:59 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:44.334 03:45:59 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:44.334 03:45:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:44.334 ************************************ 00:02:44.334 START TEST make 00:02:44.334 ************************************ 00:02:44.334 03:45:59 make -- common/autotest_common.sh@1125 -- $ make -j48 00:02:44.591 make[1]: Nothing to be done for 'all'. 00:02:46.500 The Meson build system 00:02:46.500 Version: 1.3.1 00:02:46.500 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:46.500 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:46.500 Build type: native build 00:02:46.500 Project name: libvfio-user 00:02:46.500 Project version: 0.0.1 00:02:46.500 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:46.500 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:46.500 Host machine cpu family: x86_64 00:02:46.500 Host machine cpu: x86_64 00:02:46.500 Run-time dependency threads found: YES 00:02:46.500 Library dl found: YES 00:02:46.500 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:46.500 Run-time dependency json-c found: YES 0.17 00:02:46.500 Run-time dependency cmocka found: YES 1.1.7 00:02:46.500 Program pytest-3 found: NO 00:02:46.500 Program flake8 found: NO 00:02:46.500 Program misspell-fixer found: NO 00:02:46.500 Program restructuredtext-lint found: NO 00:02:46.500 Program valgrind found: YES (/usr/bin/valgrind) 00:02:46.500 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:46.500 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:46.500 Compiler for C supports arguments -Wwrite-strings: YES 00:02:46.500 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:46.500 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:46.500 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:46.500 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:46.500 Build targets in project: 8 00:02:46.500 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:46.500 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:46.500 00:02:46.500 libvfio-user 0.0.1 00:02:46.500 00:02:46.500 User defined options 00:02:46.500 buildtype : debug 00:02:46.500 default_library: shared 00:02:46.500 libdir : /usr/local/lib 00:02:46.500 00:02:46.500 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:46.763 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:47.036 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:47.036 [2/37] Compiling C object samples/null.p/null.c.o 00:02:47.036 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:47.036 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:47.036 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:47.036 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:47.036 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:47.036 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:47.300 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:47.300 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:47.300 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:47.300 [12/37] 
Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:47.300 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:47.300 [14/37] Compiling C object samples/server.p/server.c.o 00:02:47.300 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:47.300 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:47.300 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:47.300 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:47.300 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:47.300 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:47.300 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:47.300 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:47.300 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:47.300 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:47.300 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:47.300 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:47.300 [27/37] Compiling C object samples/client.p/client.c.o 00:02:47.558 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:02:47.558 [29/37] Linking target samples/client 00:02:47.558 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:47.558 [31/37] Linking target test/unit_tests 00:02:47.558 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:47.821 [33/37] Linking target samples/null 00:02:47.821 [34/37] Linking target samples/server 00:02:47.821 [35/37] Linking target samples/gpio-pci-idio-16 00:02:47.821 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:47.821 [37/37] Linking target samples/lspci 00:02:47.821 INFO: autodetecting backend as ninja 00:02:47.821 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:47.821 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:48.391 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:48.391 ninja: no work to do. 00:03:00.588 CC lib/ut_mock/mock.o 00:03:00.588 CC lib/log/log.o 00:03:00.588 CC lib/log/log_flags.o 00:03:00.588 CC lib/log/log_deprecated.o 00:03:00.588 CC lib/ut/ut.o 00:03:00.588 LIB libspdk_log.a 00:03:00.588 LIB libspdk_ut.a 00:03:00.588 LIB libspdk_ut_mock.a 00:03:00.588 SO libspdk_ut_mock.so.6.0 00:03:00.588 SO libspdk_ut.so.2.0 00:03:00.588 SO libspdk_log.so.7.0 00:03:00.588 SYMLINK libspdk_ut_mock.so 00:03:00.588 SYMLINK libspdk_ut.so 00:03:00.588 SYMLINK libspdk_log.so 00:03:00.588 CC lib/ioat/ioat.o 00:03:00.588 CC lib/util/base64.o 00:03:00.588 CXX lib/trace_parser/trace.o 00:03:00.588 CC lib/util/bit_array.o 00:03:00.588 CC lib/dma/dma.o 00:03:00.588 CC lib/util/cpuset.o 00:03:00.588 CC lib/util/crc16.o 00:03:00.588 CC lib/util/crc32.o 00:03:00.588 CC lib/util/crc32c.o 00:03:00.588 CC lib/util/crc32_ieee.o 00:03:00.588 CC lib/util/crc64.o 00:03:00.588 CC lib/util/dif.o 00:03:00.588 CC lib/util/fd.o 00:03:00.588 CC lib/util/fd_group.o 00:03:00.588 CC lib/util/file.o 00:03:00.588 CC lib/util/hexlify.o 00:03:00.588 CC lib/util/iov.o 00:03:00.588 CC lib/util/math.o 00:03:00.588 CC lib/util/net.o 00:03:00.588 CC lib/util/pipe.o 00:03:00.588 CC lib/util/strerror_tls.o 00:03:00.588 CC lib/util/string.o 00:03:00.588 CC lib/util/uuid.o 00:03:00.588 CC lib/util/xor.o 00:03:00.588 CC lib/util/zipf.o 00:03:00.588 CC lib/vfio_user/host/vfio_user_pci.o 00:03:00.588 CC lib/vfio_user/host/vfio_user.o 00:03:00.588 LIB libspdk_dma.a 00:03:00.588 SO libspdk_dma.so.4.0 00:03:00.588 SYMLINK libspdk_dma.so 00:03:00.588 LIB libspdk_ioat.a 
00:03:00.588 SO libspdk_ioat.so.7.0 00:03:00.588 LIB libspdk_vfio_user.a 00:03:00.588 SYMLINK libspdk_ioat.so 00:03:00.588 SO libspdk_vfio_user.so.5.0 00:03:00.846 SYMLINK libspdk_vfio_user.so 00:03:00.846 LIB libspdk_util.a 00:03:00.846 SO libspdk_util.so.10.0 00:03:01.104 SYMLINK libspdk_util.so 00:03:01.104 CC lib/rdma_utils/rdma_utils.o 00:03:01.104 CC lib/idxd/idxd.o 00:03:01.104 CC lib/json/json_parse.o 00:03:01.104 CC lib/vmd/vmd.o 00:03:01.104 CC lib/rdma_provider/common.o 00:03:01.104 CC lib/idxd/idxd_user.o 00:03:01.104 CC lib/json/json_util.o 00:03:01.104 CC lib/conf/conf.o 00:03:01.104 CC lib/env_dpdk/env.o 00:03:01.104 CC lib/vmd/led.o 00:03:01.104 CC lib/idxd/idxd_kernel.o 00:03:01.104 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:01.104 CC lib/json/json_write.o 00:03:01.104 CC lib/env_dpdk/memory.o 00:03:01.104 CC lib/env_dpdk/pci.o 00:03:01.104 CC lib/env_dpdk/init.o 00:03:01.104 CC lib/env_dpdk/threads.o 00:03:01.104 CC lib/env_dpdk/pci_ioat.o 00:03:01.104 CC lib/env_dpdk/pci_virtio.o 00:03:01.104 CC lib/env_dpdk/pci_vmd.o 00:03:01.104 CC lib/env_dpdk/pci_idxd.o 00:03:01.104 CC lib/env_dpdk/pci_event.o 00:03:01.104 CC lib/env_dpdk/sigbus_handler.o 00:03:01.104 CC lib/env_dpdk/pci_dpdk.o 00:03:01.104 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:01.104 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:01.104 LIB libspdk_trace_parser.a 00:03:01.363 SO libspdk_trace_parser.so.5.0 00:03:01.363 SYMLINK libspdk_trace_parser.so 00:03:01.363 LIB libspdk_rdma_provider.a 00:03:01.363 LIB libspdk_conf.a 00:03:01.363 SO libspdk_rdma_provider.so.6.0 00:03:01.621 SO libspdk_conf.so.6.0 00:03:01.621 LIB libspdk_rdma_utils.a 00:03:01.621 SYMLINK libspdk_rdma_provider.so 00:03:01.621 SO libspdk_rdma_utils.so.1.0 00:03:01.621 SYMLINK libspdk_conf.so 00:03:01.621 LIB libspdk_json.a 00:03:01.621 SO libspdk_json.so.6.0 00:03:01.621 SYMLINK libspdk_rdma_utils.so 00:03:01.621 SYMLINK libspdk_json.so 00:03:01.878 LIB libspdk_idxd.a 00:03:01.878 CC lib/jsonrpc/jsonrpc_server.o 00:03:01.878 
CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:01.878 CC lib/jsonrpc/jsonrpc_client.o 00:03:01.878 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:01.878 SO libspdk_idxd.so.12.0 00:03:01.878 LIB libspdk_vmd.a 00:03:01.878 SYMLINK libspdk_idxd.so 00:03:01.878 SO libspdk_vmd.so.6.0 00:03:01.878 SYMLINK libspdk_vmd.so 00:03:02.136 LIB libspdk_jsonrpc.a 00:03:02.136 SO libspdk_jsonrpc.so.6.0 00:03:02.136 SYMLINK libspdk_jsonrpc.so 00:03:02.394 CC lib/rpc/rpc.o 00:03:02.652 LIB libspdk_rpc.a 00:03:02.652 SO libspdk_rpc.so.6.0 00:03:02.652 SYMLINK libspdk_rpc.so 00:03:02.652 LIB libspdk_env_dpdk.a 00:03:02.652 SO libspdk_env_dpdk.so.15.0 00:03:02.652 CC lib/trace/trace.o 00:03:02.652 CC lib/trace/trace_flags.o 00:03:02.652 CC lib/trace/trace_rpc.o 00:03:02.652 CC lib/keyring/keyring.o 00:03:02.652 CC lib/keyring/keyring_rpc.o 00:03:02.652 CC lib/notify/notify.o 00:03:02.652 CC lib/notify/notify_rpc.o 00:03:02.909 SYMLINK libspdk_env_dpdk.so 00:03:02.909 LIB libspdk_notify.a 00:03:02.909 SO libspdk_notify.so.6.0 00:03:02.909 LIB libspdk_keyring.a 00:03:02.909 SYMLINK libspdk_notify.so 00:03:02.909 LIB libspdk_trace.a 00:03:03.166 SO libspdk_keyring.so.1.0 00:03:03.166 SO libspdk_trace.so.10.0 00:03:03.166 SYMLINK libspdk_keyring.so 00:03:03.166 SYMLINK libspdk_trace.so 00:03:03.166 CC lib/thread/thread.o 00:03:03.166 CC lib/thread/iobuf.o 00:03:03.423 CC lib/sock/sock.o 00:03:03.423 CC lib/sock/sock_rpc.o 00:03:03.680 LIB libspdk_sock.a 00:03:03.680 SO libspdk_sock.so.10.0 00:03:03.680 SYMLINK libspdk_sock.so 00:03:03.937 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:03.937 CC lib/nvme/nvme_ctrlr.o 00:03:03.937 CC lib/nvme/nvme_fabric.o 00:03:03.937 CC lib/nvme/nvme_ns_cmd.o 00:03:03.937 CC lib/nvme/nvme_ns.o 00:03:03.937 CC lib/nvme/nvme_pcie_common.o 00:03:03.937 CC lib/nvme/nvme_pcie.o 00:03:03.937 CC lib/nvme/nvme_qpair.o 00:03:03.937 CC lib/nvme/nvme.o 00:03:03.937 CC lib/nvme/nvme_quirks.o 00:03:03.937 CC lib/nvme/nvme_transport.o 00:03:03.937 CC lib/nvme/nvme_discovery.o 00:03:03.937 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:03.937 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:03.937 CC lib/nvme/nvme_tcp.o 00:03:03.937 CC lib/nvme/nvme_opal.o 00:03:03.937 CC lib/nvme/nvme_io_msg.o 00:03:03.937 CC lib/nvme/nvme_poll_group.o 00:03:03.937 CC lib/nvme/nvme_zns.o 00:03:03.937 CC lib/nvme/nvme_stubs.o 00:03:03.937 CC lib/nvme/nvme_auth.o 00:03:03.937 CC lib/nvme/nvme_cuse.o 00:03:03.937 CC lib/nvme/nvme_vfio_user.o 00:03:03.937 CC lib/nvme/nvme_rdma.o 00:03:04.870 LIB libspdk_thread.a 00:03:04.870 SO libspdk_thread.so.10.1 00:03:04.870 SYMLINK libspdk_thread.so 00:03:05.128 CC lib/blob/blobstore.o 00:03:05.128 CC lib/accel/accel.o 00:03:05.128 CC lib/accel/accel_rpc.o 00:03:05.128 CC lib/virtio/virtio.o 00:03:05.128 CC lib/blob/zeroes.o 00:03:05.128 CC lib/init/subsystem.o 00:03:05.128 CC lib/accel/accel_sw.o 00:03:05.128 CC lib/blob/request.o 00:03:05.128 CC lib/virtio/virtio_vhost_user.o 00:03:05.128 CC lib/init/json_config.o 00:03:05.128 CC lib/blob/blob_bs_dev.o 00:03:05.128 CC lib/vfu_tgt/tgt_endpoint.o 00:03:05.128 CC lib/vfu_tgt/tgt_rpc.o 00:03:05.128 CC lib/virtio/virtio_vfio_user.o 00:03:05.128 CC lib/init/subsystem_rpc.o 00:03:05.128 CC lib/init/rpc.o 00:03:05.128 CC lib/virtio/virtio_pci.o 00:03:05.386 LIB libspdk_init.a 00:03:05.386 SO libspdk_init.so.5.0 00:03:05.386 LIB libspdk_virtio.a 00:03:05.386 LIB libspdk_vfu_tgt.a 00:03:05.386 SYMLINK libspdk_init.so 00:03:05.386 SO libspdk_vfu_tgt.so.3.0 00:03:05.386 SO libspdk_virtio.so.7.0 00:03:05.643 SYMLINK libspdk_vfu_tgt.so 00:03:05.643 SYMLINK libspdk_virtio.so 00:03:05.643 CC lib/event/app.o 00:03:05.643 CC lib/event/reactor.o 00:03:05.643 CC lib/event/log_rpc.o 00:03:05.643 CC lib/event/app_rpc.o 00:03:05.643 CC lib/event/scheduler_static.o 00:03:06.207 LIB libspdk_event.a 00:03:06.207 SO libspdk_event.so.14.0 00:03:06.207 LIB libspdk_accel.a 00:03:06.207 SYMLINK libspdk_event.so 00:03:06.207 SO libspdk_accel.so.16.0 00:03:06.207 SYMLINK libspdk_accel.so 00:03:06.207 LIB libspdk_nvme.a 
00:03:06.465 CC lib/bdev/bdev.o 00:03:06.465 CC lib/bdev/bdev_rpc.o 00:03:06.465 CC lib/bdev/bdev_zone.o 00:03:06.465 CC lib/bdev/part.o 00:03:06.465 CC lib/bdev/scsi_nvme.o 00:03:06.465 SO libspdk_nvme.so.13.1 00:03:06.742 SYMLINK libspdk_nvme.so 00:03:08.116 LIB libspdk_blob.a 00:03:08.116 SO libspdk_blob.so.11.0 00:03:08.116 SYMLINK libspdk_blob.so 00:03:08.374 CC lib/blobfs/blobfs.o 00:03:08.374 CC lib/blobfs/tree.o 00:03:08.374 CC lib/lvol/lvol.o 00:03:08.939 LIB libspdk_bdev.a 00:03:08.939 SO libspdk_bdev.so.16.0 00:03:08.939 SYMLINK libspdk_bdev.so 00:03:09.207 LIB libspdk_blobfs.a 00:03:09.207 SO libspdk_blobfs.so.10.0 00:03:09.207 SYMLINK libspdk_blobfs.so 00:03:09.207 LIB libspdk_lvol.a 00:03:09.207 CC lib/scsi/dev.o 00:03:09.207 CC lib/ublk/ublk.o 00:03:09.207 CC lib/scsi/lun.o 00:03:09.207 CC lib/ftl/ftl_core.o 00:03:09.207 CC lib/ublk/ublk_rpc.o 00:03:09.207 CC lib/scsi/port.o 00:03:09.207 CC lib/ftl/ftl_init.o 00:03:09.207 CC lib/scsi/scsi.o 00:03:09.207 CC lib/nbd/nbd.o 00:03:09.207 CC lib/ftl/ftl_layout.o 00:03:09.207 CC lib/scsi/scsi_bdev.o 00:03:09.207 CC lib/nbd/nbd_rpc.o 00:03:09.207 CC lib/nvmf/ctrlr.o 00:03:09.207 CC lib/scsi/scsi_pr.o 00:03:09.207 CC lib/ftl/ftl_debug.o 00:03:09.207 CC lib/scsi/scsi_rpc.o 00:03:09.207 CC lib/ftl/ftl_io.o 00:03:09.207 CC lib/scsi/task.o 00:03:09.207 CC lib/ftl/ftl_sb.o 00:03:09.207 CC lib/nvmf/ctrlr_bdev.o 00:03:09.207 CC lib/nvmf/ctrlr_discovery.o 00:03:09.207 CC lib/nvmf/subsystem.o 00:03:09.207 CC lib/ftl/ftl_l2p.o 00:03:09.207 CC lib/ftl/ftl_l2p_flat.o 00:03:09.207 CC lib/nvmf/nvmf.o 00:03:09.207 CC lib/nvmf/nvmf_rpc.o 00:03:09.207 CC lib/ftl/ftl_band.o 00:03:09.207 CC lib/ftl/ftl_nv_cache.o 00:03:09.207 SO libspdk_lvol.so.10.0 00:03:09.207 CC lib/nvmf/transport.o 00:03:09.207 CC lib/ftl/ftl_band_ops.o 00:03:09.207 CC lib/nvmf/tcp.o 00:03:09.207 CC lib/nvmf/stubs.o 00:03:09.207 CC lib/ftl/ftl_writer.o 00:03:09.207 CC lib/ftl/ftl_rq.o 00:03:09.207 CC lib/nvmf/mdns_server.o 00:03:09.207 CC 
lib/ftl/ftl_reloc.o 00:03:09.207 CC lib/nvmf/vfio_user.o 00:03:09.207 CC lib/ftl/ftl_l2p_cache.o 00:03:09.207 CC lib/nvmf/rdma.o 00:03:09.207 CC lib/ftl/ftl_p2l.o 00:03:09.207 CC lib/nvmf/auth.o 00:03:09.207 CC lib/ftl/mngt/ftl_mngt.o 00:03:09.207 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:09.207 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:09.207 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:09.207 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:09.207 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:09.207 SYMLINK libspdk_lvol.so 00:03:09.469 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:09.727 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:09.727 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:09.727 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:09.727 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:09.727 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:09.727 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:09.727 CC lib/ftl/utils/ftl_conf.o 00:03:09.727 CC lib/ftl/utils/ftl_md.o 00:03:09.727 CC lib/ftl/utils/ftl_mempool.o 00:03:09.727 CC lib/ftl/utils/ftl_bitmap.o 00:03:09.727 CC lib/ftl/utils/ftl_property.o 00:03:09.727 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:09.727 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:09.727 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:09.727 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:09.727 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:09.727 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:09.727 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:09.727 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:09.727 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:09.986 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:09.986 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:09.986 CC lib/ftl/base/ftl_base_dev.o 00:03:09.986 CC lib/ftl/base/ftl_base_bdev.o 00:03:09.986 CC lib/ftl/ftl_trace.o 00:03:09.986 LIB libspdk_nbd.a 00:03:09.986 SO libspdk_nbd.so.7.0 00:03:10.242 SYMLINK libspdk_nbd.so 00:03:10.242 LIB libspdk_scsi.a 00:03:10.242 SO libspdk_scsi.so.9.0 00:03:10.242 LIB libspdk_ublk.a 00:03:10.242 SO libspdk_ublk.so.3.0 00:03:10.242 SYMLINK libspdk_scsi.so 00:03:10.500 SYMLINK 
libspdk_ublk.so 00:03:10.500 CC lib/iscsi/conn.o 00:03:10.500 CC lib/vhost/vhost.o 00:03:10.500 CC lib/iscsi/init_grp.o 00:03:10.500 CC lib/vhost/vhost_rpc.o 00:03:10.500 CC lib/iscsi/iscsi.o 00:03:10.500 CC lib/vhost/vhost_scsi.o 00:03:10.500 CC lib/vhost/vhost_blk.o 00:03:10.500 CC lib/iscsi/md5.o 00:03:10.500 CC lib/vhost/rte_vhost_user.o 00:03:10.500 CC lib/iscsi/param.o 00:03:10.500 CC lib/iscsi/portal_grp.o 00:03:10.500 CC lib/iscsi/tgt_node.o 00:03:10.500 CC lib/iscsi/iscsi_subsystem.o 00:03:10.500 CC lib/iscsi/iscsi_rpc.o 00:03:10.500 CC lib/iscsi/task.o 00:03:10.761 LIB libspdk_ftl.a 00:03:10.761 SO libspdk_ftl.so.9.0 00:03:11.326 SYMLINK libspdk_ftl.so 00:03:11.584 LIB libspdk_vhost.a 00:03:11.841 SO libspdk_vhost.so.8.0 00:03:11.841 LIB libspdk_nvmf.a 00:03:11.841 SYMLINK libspdk_vhost.so 00:03:11.841 SO libspdk_nvmf.so.19.0 00:03:11.841 LIB libspdk_iscsi.a 00:03:12.098 SO libspdk_iscsi.so.8.0 00:03:12.098 SYMLINK libspdk_nvmf.so 00:03:12.098 SYMLINK libspdk_iscsi.so 00:03:12.355 CC module/env_dpdk/env_dpdk_rpc.o 00:03:12.355 CC module/vfu_device/vfu_virtio.o 00:03:12.355 CC module/vfu_device/vfu_virtio_blk.o 00:03:12.355 CC module/vfu_device/vfu_virtio_scsi.o 00:03:12.355 CC module/vfu_device/vfu_virtio_rpc.o 00:03:12.613 CC module/sock/posix/posix.o 00:03:12.613 CC module/accel/error/accel_error.o 00:03:12.613 CC module/accel/error/accel_error_rpc.o 00:03:12.613 CC module/keyring/linux/keyring.o 00:03:12.613 CC module/accel/iaa/accel_iaa.o 00:03:12.613 CC module/keyring/linux/keyring_rpc.o 00:03:12.613 CC module/blob/bdev/blob_bdev.o 00:03:12.613 CC module/accel/iaa/accel_iaa_rpc.o 00:03:12.613 CC module/keyring/file/keyring.o 00:03:12.613 CC module/scheduler/gscheduler/gscheduler.o 00:03:12.613 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:12.613 CC module/keyring/file/keyring_rpc.o 00:03:12.613 CC module/accel/dsa/accel_dsa.o 00:03:12.613 CC module/accel/dsa/accel_dsa_rpc.o 00:03:12.613 CC module/accel/ioat/accel_ioat.o 00:03:12.613 CC 
module/accel/ioat/accel_ioat_rpc.o 00:03:12.613 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:12.613 LIB libspdk_env_dpdk_rpc.a 00:03:12.613 SO libspdk_env_dpdk_rpc.so.6.0 00:03:12.613 SYMLINK libspdk_env_dpdk_rpc.so 00:03:12.613 LIB libspdk_keyring_linux.a 00:03:12.613 LIB libspdk_scheduler_gscheduler.a 00:03:12.613 LIB libspdk_keyring_file.a 00:03:12.613 LIB libspdk_scheduler_dpdk_governor.a 00:03:12.613 SO libspdk_scheduler_gscheduler.so.4.0 00:03:12.613 SO libspdk_keyring_linux.so.1.0 00:03:12.613 SO libspdk_keyring_file.so.1.0 00:03:12.613 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:12.613 LIB libspdk_accel_error.a 00:03:12.613 LIB libspdk_accel_ioat.a 00:03:12.870 LIB libspdk_scheduler_dynamic.a 00:03:12.870 LIB libspdk_accel_iaa.a 00:03:12.870 SO libspdk_accel_error.so.2.0 00:03:12.870 SO libspdk_accel_ioat.so.6.0 00:03:12.870 SYMLINK libspdk_scheduler_gscheduler.so 00:03:12.870 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:12.870 SO libspdk_scheduler_dynamic.so.4.0 00:03:12.870 SYMLINK libspdk_keyring_file.so 00:03:12.870 SYMLINK libspdk_keyring_linux.so 00:03:12.870 SO libspdk_accel_iaa.so.3.0 00:03:12.870 LIB libspdk_accel_dsa.a 00:03:12.870 SYMLINK libspdk_accel_error.so 00:03:12.870 SYMLINK libspdk_accel_ioat.so 00:03:12.870 LIB libspdk_blob_bdev.a 00:03:12.870 SYMLINK libspdk_scheduler_dynamic.so 00:03:12.870 SO libspdk_accel_dsa.so.5.0 00:03:12.870 SYMLINK libspdk_accel_iaa.so 00:03:12.870 SO libspdk_blob_bdev.so.11.0 00:03:12.870 SYMLINK libspdk_accel_dsa.so 00:03:12.870 SYMLINK libspdk_blob_bdev.so 00:03:13.128 LIB libspdk_vfu_device.a 00:03:13.128 SO libspdk_vfu_device.so.3.0 00:03:13.128 CC module/blobfs/bdev/blobfs_bdev.o 00:03:13.128 CC module/bdev/lvol/vbdev_lvol.o 00:03:13.128 CC module/bdev/nvme/bdev_nvme.o 00:03:13.129 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:13.129 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:13.129 CC module/bdev/error/vbdev_error.o 00:03:13.129 CC module/bdev/delay/vbdev_delay.o 00:03:13.129 CC 
module/bdev/gpt/gpt.o 00:03:13.129 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:13.129 CC module/bdev/malloc/bdev_malloc.o 00:03:13.129 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:13.129 CC module/bdev/raid/bdev_raid.o 00:03:13.129 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:13.129 CC module/bdev/error/vbdev_error_rpc.o 00:03:13.129 CC module/bdev/gpt/vbdev_gpt.o 00:03:13.129 CC module/bdev/nvme/bdev_mdns_client.o 00:03:13.129 CC module/bdev/null/bdev_null.o 00:03:13.129 CC module/bdev/nvme/nvme_rpc.o 00:03:13.129 CC module/bdev/raid/bdev_raid_rpc.o 00:03:13.129 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:13.129 CC module/bdev/null/bdev_null_rpc.o 00:03:13.129 CC module/bdev/raid/bdev_raid_sb.o 00:03:13.129 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:13.129 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:13.129 CC module/bdev/ftl/bdev_ftl.o 00:03:13.129 CC module/bdev/nvme/vbdev_opal.o 00:03:13.129 CC module/bdev/aio/bdev_aio.o 00:03:13.129 CC module/bdev/raid/raid0.o 00:03:13.129 CC module/bdev/iscsi/bdev_iscsi.o 00:03:13.129 CC module/bdev/raid/raid1.o 00:03:13.129 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:13.129 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:13.129 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:13.129 CC module/bdev/aio/bdev_aio_rpc.o 00:03:13.129 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:13.129 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:13.129 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:13.129 CC module/bdev/raid/concat.o 00:03:13.129 CC module/bdev/split/vbdev_split.o 00:03:13.129 CC module/bdev/split/vbdev_split_rpc.o 00:03:13.129 CC module/bdev/passthru/vbdev_passthru.o 00:03:13.129 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:13.386 SYMLINK libspdk_vfu_device.so 00:03:13.386 LIB libspdk_sock_posix.a 00:03:13.386 SO libspdk_sock_posix.so.6.0 00:03:13.644 LIB libspdk_blobfs_bdev.a 00:03:13.644 SYMLINK libspdk_sock_posix.so 00:03:13.644 SO libspdk_blobfs_bdev.so.6.0 00:03:13.644 LIB libspdk_bdev_split.a 00:03:13.644 SO 
libspdk_bdev_split.so.6.0 00:03:13.644 SYMLINK libspdk_blobfs_bdev.so 00:03:13.644 LIB libspdk_bdev_gpt.a 00:03:13.644 LIB libspdk_bdev_error.a 00:03:13.644 SO libspdk_bdev_gpt.so.6.0 00:03:13.644 SYMLINK libspdk_bdev_split.so 00:03:13.644 SO libspdk_bdev_error.so.6.0 00:03:13.644 LIB libspdk_bdev_ftl.a 00:03:13.644 LIB libspdk_bdev_null.a 00:03:13.644 LIB libspdk_bdev_aio.a 00:03:13.644 SO libspdk_bdev_ftl.so.6.0 00:03:13.644 SO libspdk_bdev_null.so.6.0 00:03:13.644 SYMLINK libspdk_bdev_gpt.so 00:03:13.644 SYMLINK libspdk_bdev_error.so 00:03:13.644 SO libspdk_bdev_aio.so.6.0 00:03:13.644 LIB libspdk_bdev_passthru.a 00:03:13.644 LIB libspdk_bdev_iscsi.a 00:03:13.644 LIB libspdk_bdev_zone_block.a 00:03:13.644 SYMLINK libspdk_bdev_ftl.so 00:03:13.644 SYMLINK libspdk_bdev_null.so 00:03:13.644 SO libspdk_bdev_passthru.so.6.0 00:03:13.644 SO libspdk_bdev_iscsi.so.6.0 00:03:13.902 SO libspdk_bdev_zone_block.so.6.0 00:03:13.902 LIB libspdk_bdev_malloc.a 00:03:13.902 SYMLINK libspdk_bdev_aio.so 00:03:13.902 SO libspdk_bdev_malloc.so.6.0 00:03:13.902 LIB libspdk_bdev_delay.a 00:03:13.902 SYMLINK libspdk_bdev_passthru.so 00:03:13.902 SYMLINK libspdk_bdev_iscsi.so 00:03:13.902 SYMLINK libspdk_bdev_zone_block.so 00:03:13.902 SO libspdk_bdev_delay.so.6.0 00:03:13.902 SYMLINK libspdk_bdev_malloc.so 00:03:13.902 LIB libspdk_bdev_lvol.a 00:03:13.902 SYMLINK libspdk_bdev_delay.so 00:03:13.902 SO libspdk_bdev_lvol.so.6.0 00:03:13.902 LIB libspdk_bdev_virtio.a 00:03:13.902 SYMLINK libspdk_bdev_lvol.so 00:03:13.902 SO libspdk_bdev_virtio.so.6.0 00:03:14.160 SYMLINK libspdk_bdev_virtio.so 00:03:14.418 LIB libspdk_bdev_raid.a 00:03:14.418 SO libspdk_bdev_raid.so.6.0 00:03:14.418 SYMLINK libspdk_bdev_raid.so 00:03:15.350 LIB libspdk_bdev_nvme.a 00:03:15.608 SO libspdk_bdev_nvme.so.7.0 00:03:15.608 SYMLINK libspdk_bdev_nvme.so 00:03:15.866 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:15.866 CC module/event/subsystems/vmd/vmd.o 00:03:15.866 CC 
module/event/subsystems/iobuf/iobuf.o 00:03:15.866 CC module/event/subsystems/sock/sock.o 00:03:15.866 CC module/event/subsystems/keyring/keyring.o 00:03:15.866 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:15.866 CC module/event/subsystems/scheduler/scheduler.o 00:03:15.866 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:15.866 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:16.124 LIB libspdk_event_keyring.a 00:03:16.124 LIB libspdk_event_sock.a 00:03:16.124 LIB libspdk_event_vhost_blk.a 00:03:16.124 LIB libspdk_event_scheduler.a 00:03:16.124 LIB libspdk_event_vfu_tgt.a 00:03:16.124 LIB libspdk_event_vmd.a 00:03:16.124 LIB libspdk_event_iobuf.a 00:03:16.124 SO libspdk_event_keyring.so.1.0 00:03:16.124 SO libspdk_event_vhost_blk.so.3.0 00:03:16.124 SO libspdk_event_sock.so.5.0 00:03:16.124 SO libspdk_event_scheduler.so.4.0 00:03:16.124 SO libspdk_event_vfu_tgt.so.3.0 00:03:16.124 SO libspdk_event_vmd.so.6.0 00:03:16.124 SO libspdk_event_iobuf.so.3.0 00:03:16.124 SYMLINK libspdk_event_keyring.so 00:03:16.124 SYMLINK libspdk_event_vhost_blk.so 00:03:16.124 SYMLINK libspdk_event_sock.so 00:03:16.124 SYMLINK libspdk_event_vfu_tgt.so 00:03:16.124 SYMLINK libspdk_event_scheduler.so 00:03:16.124 SYMLINK libspdk_event_vmd.so 00:03:16.124 SYMLINK libspdk_event_iobuf.so 00:03:16.382 CC module/event/subsystems/accel/accel.o 00:03:16.640 LIB libspdk_event_accel.a 00:03:16.640 SO libspdk_event_accel.so.6.0 00:03:16.640 SYMLINK libspdk_event_accel.so 00:03:16.898 CC module/event/subsystems/bdev/bdev.o 00:03:16.898 LIB libspdk_event_bdev.a 00:03:17.164 SO libspdk_event_bdev.so.6.0 00:03:17.164 SYMLINK libspdk_event_bdev.so 00:03:17.164 CC module/event/subsystems/nbd/nbd.o 00:03:17.164 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:17.164 CC module/event/subsystems/ublk/ublk.o 00:03:17.164 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:17.164 CC module/event/subsystems/scsi/scsi.o 00:03:17.425 LIB libspdk_event_nbd.a 00:03:17.425 LIB libspdk_event_ublk.a 00:03:17.425 
LIB libspdk_event_scsi.a 00:03:17.425 SO libspdk_event_nbd.so.6.0 00:03:17.425 SO libspdk_event_ublk.so.3.0 00:03:17.425 SO libspdk_event_scsi.so.6.0 00:03:17.425 SYMLINK libspdk_event_nbd.so 00:03:17.425 SYMLINK libspdk_event_ublk.so 00:03:17.425 SYMLINK libspdk_event_scsi.so 00:03:17.425 LIB libspdk_event_nvmf.a 00:03:17.684 SO libspdk_event_nvmf.so.6.0 00:03:17.684 SYMLINK libspdk_event_nvmf.so 00:03:17.684 CC module/event/subsystems/iscsi/iscsi.o 00:03:17.684 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:17.684 LIB libspdk_event_vhost_scsi.a 00:03:17.684 LIB libspdk_event_iscsi.a 00:03:17.941 SO libspdk_event_vhost_scsi.so.3.0 00:03:17.941 SO libspdk_event_iscsi.so.6.0 00:03:17.941 SYMLINK libspdk_event_vhost_scsi.so 00:03:17.941 SYMLINK libspdk_event_iscsi.so 00:03:17.941 SO libspdk.so.6.0 00:03:17.941 SYMLINK libspdk.so 00:03:18.216 TEST_HEADER include/spdk/accel.h 00:03:18.216 TEST_HEADER include/spdk/accel_module.h 00:03:18.216 TEST_HEADER include/spdk/assert.h 00:03:18.216 TEST_HEADER include/spdk/barrier.h 00:03:18.216 TEST_HEADER include/spdk/base64.h 00:03:18.216 CC test/rpc_client/rpc_client_test.o 00:03:18.216 TEST_HEADER include/spdk/bdev.h 00:03:18.216 TEST_HEADER include/spdk/bdev_module.h 00:03:18.216 TEST_HEADER include/spdk/bdev_zone.h 00:03:18.216 CC app/spdk_top/spdk_top.o 00:03:18.216 TEST_HEADER include/spdk/bit_array.h 00:03:18.216 CXX app/trace/trace.o 00:03:18.216 CC app/spdk_nvme_discover/discovery_aer.o 00:03:18.216 TEST_HEADER include/spdk/bit_pool.h 00:03:18.216 CC app/spdk_lspci/spdk_lspci.o 00:03:18.216 CC app/spdk_nvme_identify/identify.o 00:03:18.216 TEST_HEADER include/spdk/blob_bdev.h 00:03:18.216 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:18.216 TEST_HEADER include/spdk/blobfs.h 00:03:18.216 CC app/trace_record/trace_record.o 00:03:18.216 TEST_HEADER include/spdk/blob.h 00:03:18.216 TEST_HEADER include/spdk/conf.h 00:03:18.216 TEST_HEADER include/spdk/config.h 00:03:18.216 TEST_HEADER include/spdk/cpuset.h 
00:03:18.216 CC app/spdk_nvme_perf/perf.o 00:03:18.216 TEST_HEADER include/spdk/crc16.h 00:03:18.216 TEST_HEADER include/spdk/crc32.h 00:03:18.216 TEST_HEADER include/spdk/crc64.h 00:03:18.216 TEST_HEADER include/spdk/dma.h 00:03:18.216 TEST_HEADER include/spdk/dif.h 00:03:18.216 TEST_HEADER include/spdk/endian.h 00:03:18.216 TEST_HEADER include/spdk/env_dpdk.h 00:03:18.217 TEST_HEADER include/spdk/env.h 00:03:18.217 TEST_HEADER include/spdk/event.h 00:03:18.217 TEST_HEADER include/spdk/fd_group.h 00:03:18.217 TEST_HEADER include/spdk/file.h 00:03:18.217 TEST_HEADER include/spdk/fd.h 00:03:18.217 TEST_HEADER include/spdk/ftl.h 00:03:18.217 TEST_HEADER include/spdk/gpt_spec.h 00:03:18.217 TEST_HEADER include/spdk/hexlify.h 00:03:18.217 TEST_HEADER include/spdk/histogram_data.h 00:03:18.217 TEST_HEADER include/spdk/idxd.h 00:03:18.217 TEST_HEADER include/spdk/idxd_spec.h 00:03:18.217 TEST_HEADER include/spdk/init.h 00:03:18.217 TEST_HEADER include/spdk/ioat.h 00:03:18.217 TEST_HEADER include/spdk/ioat_spec.h 00:03:18.217 TEST_HEADER include/spdk/iscsi_spec.h 00:03:18.217 TEST_HEADER include/spdk/json.h 00:03:18.217 TEST_HEADER include/spdk/jsonrpc.h 00:03:18.217 TEST_HEADER include/spdk/keyring.h 00:03:18.217 TEST_HEADER include/spdk/keyring_module.h 00:03:18.217 TEST_HEADER include/spdk/likely.h 00:03:18.217 TEST_HEADER include/spdk/lvol.h 00:03:18.217 TEST_HEADER include/spdk/log.h 00:03:18.217 TEST_HEADER include/spdk/memory.h 00:03:18.217 TEST_HEADER include/spdk/mmio.h 00:03:18.217 TEST_HEADER include/spdk/nbd.h 00:03:18.217 TEST_HEADER include/spdk/net.h 00:03:18.217 TEST_HEADER include/spdk/notify.h 00:03:18.217 TEST_HEADER include/spdk/nvme.h 00:03:18.217 TEST_HEADER include/spdk/nvme_intel.h 00:03:18.217 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:18.217 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:18.217 TEST_HEADER include/spdk/nvme_spec.h 00:03:18.217 TEST_HEADER include/spdk/nvme_zns.h 00:03:18.217 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:18.217 
TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:18.217 TEST_HEADER include/spdk/nvmf.h 00:03:18.217 TEST_HEADER include/spdk/nvmf_spec.h 00:03:18.217 TEST_HEADER include/spdk/nvmf_transport.h 00:03:18.217 TEST_HEADER include/spdk/opal.h 00:03:18.217 TEST_HEADER include/spdk/opal_spec.h 00:03:18.217 TEST_HEADER include/spdk/pci_ids.h 00:03:18.217 TEST_HEADER include/spdk/pipe.h 00:03:18.217 TEST_HEADER include/spdk/queue.h 00:03:18.217 TEST_HEADER include/spdk/reduce.h 00:03:18.217 TEST_HEADER include/spdk/rpc.h 00:03:18.217 TEST_HEADER include/spdk/scheduler.h 00:03:18.217 TEST_HEADER include/spdk/scsi.h 00:03:18.217 TEST_HEADER include/spdk/scsi_spec.h 00:03:18.217 TEST_HEADER include/spdk/sock.h 00:03:18.217 TEST_HEADER include/spdk/stdinc.h 00:03:18.217 TEST_HEADER include/spdk/string.h 00:03:18.217 TEST_HEADER include/spdk/trace.h 00:03:18.217 TEST_HEADER include/spdk/thread.h 00:03:18.217 TEST_HEADER include/spdk/trace_parser.h 00:03:18.217 TEST_HEADER include/spdk/tree.h 00:03:18.217 TEST_HEADER include/spdk/ublk.h 00:03:18.217 TEST_HEADER include/spdk/uuid.h 00:03:18.217 TEST_HEADER include/spdk/util.h 00:03:18.217 TEST_HEADER include/spdk/version.h 00:03:18.217 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:18.217 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:18.217 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:18.217 TEST_HEADER include/spdk/vhost.h 00:03:18.217 TEST_HEADER include/spdk/vmd.h 00:03:18.217 TEST_HEADER include/spdk/xor.h 00:03:18.217 TEST_HEADER include/spdk/zipf.h 00:03:18.217 CXX test/cpp_headers/accel.o 00:03:18.217 CXX test/cpp_headers/accel_module.o 00:03:18.217 CXX test/cpp_headers/assert.o 00:03:18.217 CXX test/cpp_headers/barrier.o 00:03:18.217 CXX test/cpp_headers/base64.o 00:03:18.217 CXX test/cpp_headers/bdev.o 00:03:18.217 CXX test/cpp_headers/bdev_module.o 00:03:18.217 CC app/spdk_dd/spdk_dd.o 00:03:18.217 CXX test/cpp_headers/bdev_zone.o 00:03:18.217 CXX test/cpp_headers/bit_array.o 00:03:18.217 CXX 
test/cpp_headers/bit_pool.o 00:03:18.217 CXX test/cpp_headers/blob_bdev.o 00:03:18.217 CXX test/cpp_headers/blobfs_bdev.o 00:03:18.217 CXX test/cpp_headers/blobfs.o 00:03:18.217 CXX test/cpp_headers/blob.o 00:03:18.217 CXX test/cpp_headers/conf.o 00:03:18.217 CXX test/cpp_headers/config.o 00:03:18.217 CXX test/cpp_headers/cpuset.o 00:03:18.217 CXX test/cpp_headers/crc16.o 00:03:18.218 CC app/iscsi_tgt/iscsi_tgt.o 00:03:18.218 CC app/nvmf_tgt/nvmf_main.o 00:03:18.218 CXX test/cpp_headers/crc32.o 00:03:18.218 CC examples/util/zipf/zipf.o 00:03:18.218 CC test/app/histogram_perf/histogram_perf.o 00:03:18.218 CC test/app/jsoncat/jsoncat.o 00:03:18.218 CC test/thread/poller_perf/poller_perf.o 00:03:18.218 CC app/spdk_tgt/spdk_tgt.o 00:03:18.218 CC examples/ioat/verify/verify.o 00:03:18.218 CC test/app/stub/stub.o 00:03:18.218 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:18.218 CC test/env/vtophys/vtophys.o 00:03:18.218 CC test/env/memory/memory_ut.o 00:03:18.218 CC test/env/pci/pci_ut.o 00:03:18.218 CC examples/ioat/perf/perf.o 00:03:18.218 CC app/fio/nvme/fio_plugin.o 00:03:18.478 CC test/dma/test_dma/test_dma.o 00:03:18.478 CC app/fio/bdev/fio_plugin.o 00:03:18.478 CC test/app/bdev_svc/bdev_svc.o 00:03:18.478 LINK spdk_lspci 00:03:18.478 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:18.478 CC test/env/mem_callbacks/mem_callbacks.o 00:03:18.478 LINK rpc_client_test 00:03:18.738 LINK spdk_nvme_discover 00:03:18.738 CXX test/cpp_headers/crc64.o 00:03:18.738 LINK histogram_perf 00:03:18.738 LINK jsoncat 00:03:18.738 LINK interrupt_tgt 00:03:18.738 CXX test/cpp_headers/dif.o 00:03:18.738 LINK poller_perf 00:03:18.738 LINK vtophys 00:03:18.738 LINK zipf 00:03:18.738 CXX test/cpp_headers/dma.o 00:03:18.738 CXX test/cpp_headers/endian.o 00:03:18.738 LINK env_dpdk_post_init 00:03:18.738 CXX test/cpp_headers/env_dpdk.o 00:03:18.738 CXX test/cpp_headers/env.o 00:03:18.738 CXX test/cpp_headers/event.o 00:03:18.738 CXX test/cpp_headers/fd_group.o 00:03:18.738 CXX 
test/cpp_headers/fd.o 00:03:18.738 CXX test/cpp_headers/file.o 00:03:18.738 CXX test/cpp_headers/ftl.o 00:03:18.738 CXX test/cpp_headers/gpt_spec.o 00:03:18.738 CXX test/cpp_headers/hexlify.o 00:03:18.738 LINK nvmf_tgt 00:03:18.738 LINK stub 00:03:18.738 LINK spdk_trace_record 00:03:18.738 LINK iscsi_tgt 00:03:18.738 CXX test/cpp_headers/histogram_data.o 00:03:18.738 CXX test/cpp_headers/idxd.o 00:03:18.738 CXX test/cpp_headers/idxd_spec.o 00:03:18.738 LINK verify 00:03:18.738 LINK spdk_tgt 00:03:18.738 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:18.738 LINK bdev_svc 00:03:18.738 LINK ioat_perf 00:03:18.738 CXX test/cpp_headers/init.o 00:03:18.738 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:19.000 CXX test/cpp_headers/ioat.o 00:03:19.000 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:19.000 CXX test/cpp_headers/ioat_spec.o 00:03:19.000 CXX test/cpp_headers/iscsi_spec.o 00:03:19.000 CXX test/cpp_headers/json.o 00:03:19.000 CXX test/cpp_headers/jsonrpc.o 00:03:19.000 LINK spdk_dd 00:03:19.000 CXX test/cpp_headers/keyring.o 00:03:19.000 CXX test/cpp_headers/keyring_module.o 00:03:19.000 CXX test/cpp_headers/likely.o 00:03:19.000 LINK pci_ut 00:03:19.000 CXX test/cpp_headers/log.o 00:03:19.000 CXX test/cpp_headers/lvol.o 00:03:19.000 CXX test/cpp_headers/memory.o 00:03:19.000 CXX test/cpp_headers/mmio.o 00:03:19.000 CXX test/cpp_headers/nbd.o 00:03:19.000 CXX test/cpp_headers/net.o 00:03:19.000 CXX test/cpp_headers/notify.o 00:03:19.000 CXX test/cpp_headers/nvme.o 00:03:19.000 CXX test/cpp_headers/nvme_intel.o 00:03:19.000 LINK spdk_trace 00:03:19.000 CXX test/cpp_headers/nvme_ocssd.o 00:03:19.000 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:19.265 CXX test/cpp_headers/nvme_spec.o 00:03:19.265 CXX test/cpp_headers/nvme_zns.o 00:03:19.265 CXX test/cpp_headers/nvmf_cmd.o 00:03:19.265 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:19.265 CXX test/cpp_headers/nvmf.o 00:03:19.265 CXX test/cpp_headers/nvmf_spec.o 00:03:19.265 CXX test/cpp_headers/nvmf_transport.o 
00:03:19.265 LINK test_dma 00:03:19.265 CXX test/cpp_headers/opal.o 00:03:19.265 CXX test/cpp_headers/opal_spec.o 00:03:19.265 CC test/event/event_perf/event_perf.o 00:03:19.266 CC test/event/reactor/reactor.o 00:03:19.266 CC test/event/reactor_perf/reactor_perf.o 00:03:19.266 CXX test/cpp_headers/pci_ids.o 00:03:19.266 LINK nvme_fuzz 00:03:19.266 CC examples/sock/hello_world/hello_sock.o 00:03:19.266 CC examples/idxd/perf/perf.o 00:03:19.266 CXX test/cpp_headers/pipe.o 00:03:19.266 CXX test/cpp_headers/queue.o 00:03:19.526 CC examples/vmd/led/led.o 00:03:19.526 CC examples/vmd/lsvmd/lsvmd.o 00:03:19.526 CXX test/cpp_headers/reduce.o 00:03:19.526 LINK spdk_bdev 00:03:19.526 CC examples/thread/thread/thread_ex.o 00:03:19.526 LINK spdk_nvme 00:03:19.526 CXX test/cpp_headers/rpc.o 00:03:19.526 CXX test/cpp_headers/scheduler.o 00:03:19.526 CXX test/cpp_headers/scsi.o 00:03:19.526 CXX test/cpp_headers/scsi_spec.o 00:03:19.526 CXX test/cpp_headers/sock.o 00:03:19.526 CXX test/cpp_headers/stdinc.o 00:03:19.526 CC test/event/app_repeat/app_repeat.o 00:03:19.526 CXX test/cpp_headers/string.o 00:03:19.526 CXX test/cpp_headers/thread.o 00:03:19.526 CXX test/cpp_headers/trace.o 00:03:19.526 CXX test/cpp_headers/trace_parser.o 00:03:19.526 CC test/event/scheduler/scheduler.o 00:03:19.526 CXX test/cpp_headers/tree.o 00:03:19.526 CXX test/cpp_headers/ublk.o 00:03:19.526 CXX test/cpp_headers/util.o 00:03:19.526 CXX test/cpp_headers/uuid.o 00:03:19.526 CXX test/cpp_headers/version.o 00:03:19.526 CXX test/cpp_headers/vfio_user_pci.o 00:03:19.526 CXX test/cpp_headers/vfio_user_spec.o 00:03:19.526 CXX test/cpp_headers/vhost.o 00:03:19.526 CXX test/cpp_headers/vmd.o 00:03:19.526 CXX test/cpp_headers/xor.o 00:03:19.526 LINK reactor 00:03:19.526 CXX test/cpp_headers/zipf.o 00:03:19.526 LINK reactor_perf 00:03:19.787 LINK event_perf 00:03:19.787 LINK lsvmd 00:03:19.787 LINK vhost_fuzz 00:03:19.787 LINK spdk_nvme_perf 00:03:19.787 LINK led 00:03:19.787 LINK mem_callbacks 00:03:19.787 LINK 
spdk_nvme_identify 00:03:19.787 CC app/vhost/vhost.o 00:03:19.787 LINK app_repeat 00:03:19.787 LINK hello_sock 00:03:19.787 LINK spdk_top 00:03:20.048 CC test/nvme/sgl/sgl.o 00:03:20.049 CC test/nvme/err_injection/err_injection.o 00:03:20.049 CC test/nvme/reset/reset.o 00:03:20.049 CC test/nvme/aer/aer.o 00:03:20.049 CC test/nvme/overhead/overhead.o 00:03:20.049 CC test/nvme/reserve/reserve.o 00:03:20.049 CC test/nvme/startup/startup.o 00:03:20.049 CC test/nvme/e2edp/nvme_dp.o 00:03:20.049 LINK thread 00:03:20.049 CC test/nvme/simple_copy/simple_copy.o 00:03:20.049 CC test/accel/dif/dif.o 00:03:20.049 CC test/blobfs/mkfs/mkfs.o 00:03:20.049 CC test/nvme/connect_stress/connect_stress.o 00:03:20.049 CC test/nvme/boot_partition/boot_partition.o 00:03:20.049 LINK scheduler 00:03:20.049 CC test/nvme/compliance/nvme_compliance.o 00:03:20.049 CC test/nvme/fused_ordering/fused_ordering.o 00:03:20.049 CC test/nvme/fdp/fdp.o 00:03:20.049 CC test/nvme/cuse/cuse.o 00:03:20.049 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:20.049 CC test/lvol/esnap/esnap.o 00:03:20.049 LINK idxd_perf 00:03:20.049 LINK vhost 00:03:20.049 LINK err_injection 00:03:20.307 LINK startup 00:03:20.307 LINK reserve 00:03:20.307 CC examples/nvme/abort/abort.o 00:03:20.307 CC examples/nvme/reconnect/reconnect.o 00:03:20.307 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:20.307 CC examples/nvme/arbitration/arbitration.o 00:03:20.307 LINK reset 00:03:20.307 CC examples/nvme/hello_world/hello_world.o 00:03:20.307 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:20.307 LINK simple_copy 00:03:20.307 CC examples/nvme/hotplug/hotplug.o 00:03:20.307 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.307 LINK aer 00:03:20.307 LINK nvme_dp 00:03:20.307 LINK sgl 00:03:20.307 LINK connect_stress 00:03:20.307 LINK boot_partition 00:03:20.307 LINK overhead 00:03:20.307 LINK mkfs 00:03:20.307 LINK fused_ordering 00:03:20.307 LINK memory_ut 00:03:20.307 LINK doorbell_aers 00:03:20.565 CC 
examples/accel/perf/accel_perf.o 00:03:20.565 CC examples/blob/cli/blobcli.o 00:03:20.565 CC examples/blob/hello_world/hello_blob.o 00:03:20.565 LINK cmb_copy 00:03:20.565 LINK nvme_compliance 00:03:20.565 LINK hello_world 00:03:20.565 LINK dif 00:03:20.565 LINK pmr_persistence 00:03:20.565 LINK fdp 00:03:20.822 LINK reconnect 00:03:20.823 LINK hotplug 00:03:20.823 LINK abort 00:03:20.823 LINK arbitration 00:03:20.823 LINK hello_blob 00:03:20.823 LINK nvme_manage 00:03:21.094 LINK accel_perf 00:03:21.094 LINK blobcli 00:03:21.094 CC test/bdev/bdevio/bdevio.o 00:03:21.094 LINK iscsi_fuzz 00:03:21.358 CC examples/bdev/hello_world/hello_bdev.o 00:03:21.358 CC examples/bdev/bdevperf/bdevperf.o 00:03:21.358 LINK bdevio 00:03:21.615 LINK cuse 00:03:21.615 LINK hello_bdev 00:03:22.182 LINK bdevperf 00:03:22.440 CC examples/nvmf/nvmf/nvmf.o 00:03:23.009 LINK nvmf 00:03:24.907 LINK esnap 00:03:25.473 00:03:25.473 real 0m41.035s 00:03:25.473 user 7m24.718s 00:03:25.473 sys 1m49.057s 00:03:25.473 03:46:40 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:25.473 03:46:40 make -- common/autotest_common.sh@10 -- $ set +x 00:03:25.473 ************************************ 00:03:25.473 END TEST make 00:03:25.473 ************************************ 00:03:25.473 03:46:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:25.473 03:46:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:25.473 03:46:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:25.473 03:46:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.473 03:46:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:25.473 03:46:40 -- pm/common@44 -- $ pid=594850 00:03:25.473 03:46:40 -- pm/common@50 -- $ kill -TERM 594850 00:03:25.474 03:46:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.474 03:46:40 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:25.474 03:46:40 -- pm/common@44 -- $ pid=594852 00:03:25.474 03:46:40 -- pm/common@50 -- $ kill -TERM 594852 00:03:25.474 03:46:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.474 03:46:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:25.474 03:46:40 -- pm/common@44 -- $ pid=594854 00:03:25.474 03:46:40 -- pm/common@50 -- $ kill -TERM 594854 00:03:25.474 03:46:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.474 03:46:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:25.474 03:46:40 -- pm/common@44 -- $ pid=594885 00:03:25.474 03:46:40 -- pm/common@50 -- $ sudo -E kill -TERM 594885 00:03:25.474 03:46:40 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:25.474 03:46:40 -- nvmf/common.sh@7 -- # uname -s 00:03:25.474 03:46:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:25.474 03:46:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:25.474 03:46:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:25.474 03:46:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:25.474 03:46:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:25.474 03:46:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:25.474 03:46:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:25.474 03:46:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:25.474 03:46:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:25.474 03:46:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:25.474 03:46:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:25.474 03:46:40 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:25.474 03:46:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:25.474 03:46:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:25.474 03:46:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:25.474 03:46:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:25.474 03:46:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:25.474 03:46:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:25.474 03:46:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.474 03:46:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.474 03:46:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.474 03:46:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.474 03:46:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.474 03:46:40 -- paths/export.sh@5 -- # export PATH 00:03:25.474 03:46:40 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.474 03:46:40 -- nvmf/common.sh@47 -- # : 0 00:03:25.474 03:46:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:25.474 03:46:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:25.474 03:46:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:25.474 03:46:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:25.474 03:46:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.474 03:46:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:25.474 03:46:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:25.474 03:46:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:25.474 03:46:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.474 03:46:40 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.474 03:46:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.474 03:46:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:25.474 03:46:40 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:25.474 03:46:40 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.474 03:46:40 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:25.474 03:46:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.474 03:46:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:25.474 03:46:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:25.474 03:46:40 -- spdk/autotest.sh@48 -- # udevadm_pid=666672 00:03:25.474 03:46:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:25.474 03:46:40 -- 
spdk/autotest.sh@53 -- # start_monitor_resources 00:03:25.474 03:46:40 -- pm/common@17 -- # local monitor 00:03:25.474 03:46:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.474 03:46:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.474 03:46:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.474 03:46:40 -- pm/common@21 -- # date +%s 00:03:25.474 03:46:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.474 03:46:40 -- pm/common@21 -- # date +%s 00:03:25.474 03:46:40 -- pm/common@25 -- # sleep 1 00:03:25.474 03:46:40 -- pm/common@21 -- # date +%s 00:03:25.474 03:46:40 -- pm/common@21 -- # date +%s 00:03:25.474 03:46:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721872000 00:03:25.474 03:46:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721872000 00:03:25.474 03:46:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721872000 00:03:25.474 03:46:40 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721872000 00:03:25.474 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721872000_collect-vmstat.pm.log 00:03:25.474 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721872000_collect-cpu-load.pm.log 00:03:25.474 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721872000_collect-cpu-temp.pm.log 00:03:25.474 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721872000_collect-bmc-pm.bmc.pm.log 00:03:26.848 03:46:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:26.848 03:46:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:26.848 03:46:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:26.848 03:46:41 -- common/autotest_common.sh@10 -- # set +x 00:03:26.848 03:46:41 -- spdk/autotest.sh@59 -- # create_test_list 00:03:26.848 03:46:41 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:26.848 03:46:41 -- common/autotest_common.sh@10 -- # set +x 00:03:26.848 03:46:41 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:26.848 03:46:41 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:26.848 03:46:41 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:26.848 03:46:41 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:26.848 03:46:41 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:26.848 03:46:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:26.848 03:46:41 -- common/autotest_common.sh@1455 -- # uname 00:03:26.848 03:46:41 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:26.848 03:46:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:26.848 03:46:41 -- common/autotest_common.sh@1475 -- # uname 00:03:26.848 03:46:41 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:26.848 03:46:41 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:26.848 03:46:41 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:26.848 03:46:41 -- spdk/autotest.sh@72 -- # 
hash lcov 00:03:26.848 03:46:41 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:26.848 03:46:41 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:26.848 --rc lcov_branch_coverage=1 00:03:26.848 --rc lcov_function_coverage=1 00:03:26.848 --rc genhtml_branch_coverage=1 00:03:26.848 --rc genhtml_function_coverage=1 00:03:26.848 --rc genhtml_legend=1 00:03:26.848 --rc geninfo_all_blocks=1 00:03:26.848 ' 00:03:26.848 03:46:41 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:26.848 --rc lcov_branch_coverage=1 00:03:26.848 --rc lcov_function_coverage=1 00:03:26.848 --rc genhtml_branch_coverage=1 00:03:26.848 --rc genhtml_function_coverage=1 00:03:26.848 --rc genhtml_legend=1 00:03:26.848 --rc geninfo_all_blocks=1 00:03:26.848 ' 00:03:26.848 03:46:41 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:26.848 --rc lcov_branch_coverage=1 00:03:26.848 --rc lcov_function_coverage=1 00:03:26.848 --rc genhtml_branch_coverage=1 00:03:26.848 --rc genhtml_function_coverage=1 00:03:26.848 --rc genhtml_legend=1 00:03:26.848 --rc geninfo_all_blocks=1 00:03:26.848 --no-external' 00:03:26.848 03:46:41 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:26.848 --rc lcov_branch_coverage=1 00:03:26.848 --rc lcov_function_coverage=1 00:03:26.848 --rc genhtml_branch_coverage=1 00:03:26.848 --rc genhtml_function_coverage=1 00:03:26.848 --rc genhtml_legend=1 00:03:26.848 --rc geninfo_all_blocks=1 00:03:26.848 --no-external' 00:03:26.848 03:46:41 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:26.848 lcov: LCOV version 1.14 00:03:26.848 03:46:41 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:53.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:53.374 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:56.652 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:56.652 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:56.652 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:56.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:56.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:56.653 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:03:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:03:56.653 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:03:59.954 03:47:15 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:03:59.954 03:47:15 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:59.954 03:47:15 -- common/autotest_common.sh@10 -- # set +x
00:03:59.954 03:47:15 -- spdk/autotest.sh@91 -- # rm -f
00:03:59.954 03:47:15 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:01.392 0000:88:00.0 (8086 0a54): Already using the nvme driver
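The `setup.sh reset` step above rebinds each managed PCI function to its kernel driver by writing the device's BDF into sysfs `unbind`/`bind` files; the "Already using the ... driver" lines mean no rebind was needed. A minimal sketch of that mechanism, exercised against a throwaway directory standing in for `/sys/bus/pci` so it runs without root or real hardware (the `rebind` helper and fake tree are illustrative, not part of setup.sh):

```shell
# Fake sysfs tree: regular files stand in for the kernel's bind/unbind attributes.
fake_sys=$(mktemp -d)
mkdir -p "$fake_sys/bus/pci/drivers/nvme" "$fake_sys/bus/pci/drivers/vfio-pci"
bdf="0000:88:00.0"

rebind() { # rebind <bdf> <from-driver> <to-driver>
    local bdf=$1 from=$2 to=$3
    # Detach from the current driver, then attach to the target driver,
    # exactly the two sysfs writes a real rebind performs.
    echo "$bdf" > "$fake_sys/bus/pci/drivers/$from/unbind"
    echo "$bdf" > "$fake_sys/bus/pci/drivers/$to/bind"
    echo "$bdf ($from -> $to)"
}

rebind "$bdf" nvme vfio-pci
```

Against the real `/sys/bus/pci`, the same two writes are what move a controller between `nvme` and `vfio-pci`, as seen later in this log.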
00:04:01.392 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:04:01.392 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:04:01.392 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:04:01.392 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:04:01.392 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:04:01.392 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:04:01.392 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:04:01.392 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:04:01.392 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:04:01.392 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:04:01.392 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:04:01.392 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:04:01.392 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:04:01.392 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:04:01.392 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:04:01.392 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:04:01.392 03:47:16 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:04:01.392 03:47:16 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:01.392 03:47:16 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:01.392 03:47:16 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:01.392 03:47:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:01.392 03:47:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:01.392 03:47:16 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:01.392 03:47:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:01.392 03:47:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:01.392 03:47:16 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:04:01.392 03:47:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:04:01.392 03:47:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:04:01.392 03:47:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:04:01.392 03:47:16 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:04:01.392 03:47:16 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:01.392 No valid GPT data, bailing
00:04:01.392 03:47:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:01.392 03:47:16 -- scripts/common.sh@391 -- # pt=
00:04:01.392 03:47:16 -- scripts/common.sh@392 -- # return 1
00:04:01.392 03:47:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:01.392 1+0 records in
00:04:01.392 1+0 records out
00:04:01.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00243751 s, 430 MB/s
00:04:01.392 03:47:16 -- spdk/autotest.sh@118 -- # sync
00:04:01.392 03:47:16 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:01.392 03:47:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:01.392 03:47:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:03.291 03:47:18 -- spdk/autotest.sh@124 -- # uname -s
00:04:03.291 03:47:18 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:04:03.291 03:47:18 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:04:03.291 03:47:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:03.291 03:47:18 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:03.291 03:47:18 -- common/autotest_common.sh@10 -- # set +x
00:04:03.291 ************************************
00:04:03.291 START TEST setup.sh
00:04:03.291 ************************************
00:04:03.291 03:47:18 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
* Looking for test storage...
00:04:03.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:03.292 03:47:18 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:04:03.292 03:47:18 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:04:03.292 03:47:18 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:04:03.292 03:47:18 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:03.292 03:47:18 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:03.292 03:47:18 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:03.292 ************************************
00:04:03.292 START TEST acl
00:04:03.292 ************************************
00:04:03.292 03:47:18 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
* Looking for test storage...
00:04:03.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:03.550 03:47:18 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:04:03.550 03:47:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:03.550 03:47:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:03.550 03:47:18 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:03.550 03:47:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:03.550 03:47:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:03.550 03:47:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:03.550 03:47:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:03.550 03:47:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:03.550 03:47:18 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:04:03.550 03:47:18 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:04:03.550 03:47:18 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:04:03.550 03:47:18 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:04:03.550 03:47:18 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:04:03.550 03:47:18 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:03.550 03:47:18 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:04.924 03:47:20 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:04:04.924 03:47:20 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:04:04.924 03:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:04.924 03:47:20 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:04:04.924 03:47:20 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:04:04.924 03:47:20 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:06.297 Hugepages
00:04:06.297 node hugesize free / total
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297
00:04:06.297 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:06.297 03:47:21 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:04:06.298 03:47:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:04:06.298 03:47:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:06.298 03:47:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:06.298 03:47:21 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:04:06.298 03:47:21 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:04:06.298 03:47:21 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:06.298 03:47:21 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:06.298 03:47:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:06.298 ************************************
00:04:06.298 START TEST denied
00:04:06.298 ************************************
00:04:06.298 03:47:21 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied
00:04:06.298 03:47:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0'
00:04:06.298 03:47:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:04:06.298 03:47:21 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0'
00:04:06.298 03:47:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:04:06.298 03:47:21 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:07.682 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0
00:04:07.683 03:47:22 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0
00:04:07.683 03:47:22 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:04:07.683 03:47:22 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:04:07.683 03:47:22 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]]
00:04:07.683 03:47:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver
00:04:07.683 03:47:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:07.683 03:47:22 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:07.683 03:47:22 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:04:07.683 03:47:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:07.683 03:47:22 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:10.209
00:04:10.209 real 0m3.727s
00:04:10.209 user 0m1.066s
00:04:10.209 sys 0m1.756s
00:04:10.209 03:47:25 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:10.209 03:47:25 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:04:10.209 ************************************
00:04:10.209 END TEST denied
00:04:10.209 ************************************
00:04:10.209 03:47:25 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:10.209 03:47:25 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:10.209 03:47:25 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:10.209 03:47:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:10.209 ************************************
00:04:10.209 START TEST allowed
00:04:10.209 ************************************
00:04:10.209 03:47:25 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed
00:04:10.209 03:47:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0
00:04:10.209 03:47:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:04:10.209 03:47:25 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*'
00:04:10.209 03:47:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:04:10.209 03:47:25 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:12.109 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:04:12.109 03:47:27 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:04:12.109 03:47:27 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:04:12.109 03:47:27 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:04:12.109 03:47:27 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
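The `denied` and `allowed` tests above drive the same `setup.sh config` step with `PCI_BLOCKED` and `PCI_ALLOWED` set: a blocked controller is skipped ("Skipping denied controller"), while an allowed one is rebound (`nvme -> vfio-pci`). A stand-in for that decision logic (the function below is a simplified sketch, not the real code in the SPDK scripts):

```shell
# pci_can_use <bdf>: succeed (exit 0) if the device may be touched.
# Policy mirrored from the log: an explicit block always wins; an empty
# allow list means "everything allowed"; otherwise the BDF must be listed.
pci_can_use() {
    bdf=$1
    case " $PCI_BLOCKED " in *" $bdf "*) return 1 ;; esac
    [ -z "$PCI_ALLOWED" ] && return 0
    case " $PCI_ALLOWED " in *" $bdf "*) return 0 ;; *) return 1 ;; esac
}

PCI_BLOCKED=" 0000:88:00.0" PCI_ALLOWED=""
pci_can_use 0000:88:00.0 || echo "Skipping denied controller at 0000:88:00.0"

PCI_BLOCKED="" PCI_ALLOWED="0000:88:00.0"
pci_can_use 0000:88:00.0 && echo "0000:88:00.0 allowed"
```

Wrapping each list in spaces lets a plain glob match whole BDF tokens, which is why the test sets `PCI_BLOCKED=' 0000:88:00.0'` with a leading space.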
00:04:12.109 03:47:27 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:14.010
00:04:14.010 real 0m3.802s
00:04:14.010 user 0m0.973s
00:04:14.010 sys 0m1.640s
00:04:14.010 03:47:28 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:14.010 03:47:28 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:04:14.010 ************************************
00:04:14.010 END TEST allowed
00:04:14.010 ************************************
00:04:14.010
00:04:14.010 real 0m10.383s
00:04:14.010 user 0m3.232s
00:04:14.010 sys 0m5.129s
00:04:14.010 03:47:28 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:14.010 03:47:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:14.010 ************************************
00:04:14.010 END TEST acl
00:04:14.010 ************************************
00:04:14.010 03:47:28 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:04:14.010 03:47:28 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:14.010 03:47:28 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:14.010 03:47:28 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:14.010 ************************************
00:04:14.010 START TEST hugepages
00:04:14.010 ************************************
00:04:14.010 03:47:29 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
* Looking for test storage...
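Every `START TEST`/`END TEST` banner and `real`/`user`/`sys` triple in this log comes from the harness's `run_test` wrapper, which prints an opening banner, times the test body, and prints a closing banner. A simplified stand-in is sketched below; the real helper in SPDK's autotest_common.sh also does argument checks and xtrace bookkeeping not shown here:

```shell
# run_test <name> <command...>: banner, timed execution, banner.
run_test() {
    local name=$1 rc
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # `time` emits the real/user/sys lines seen in the log
    rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc           # propagate the test body's exit status
}

run_test demo true
```

Because tests invoke `run_test` recursively (setup.sh runs acl, acl runs denied/allowed), the banners and timing triples nest, which is exactly the interleaving visible above.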
00:04:14.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 42123952 kB' 'MemAvailable: 45628124 kB' 'Buffers: 2704 kB' 'Cached: 11868300 kB' 'SwapCached: 0 kB' 'Active: 8843144 kB' 'Inactive: 3502164 kB' 'Active(anon): 8446896 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477612 kB' 'Mapped: 186084 kB' 'Shmem: 7972592 kB' 'KReclaimable: 200208 kB' 'Slab: 579700 kB' 'SReclaimable: 200208 kB' 'SUnreclaim: 379492 kB' 'KernelStack: 12816 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 9567820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196324 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB'
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.010 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:14.011 03:47:29 setup.sh.hugepages --
setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 
03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.011 03:47:29 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.011 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.012 03:47:29 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:14.012 
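The trace above comes from a field-scan helper: `get_meminfo` reads /proc/meminfo line by line with `IFS=': ' read -r var val _`, skipping every field until the requested key (here Hugepagesize) matches, then echoes its value. A minimal sketch of that technique, run against a canned meminfo excerpt so it is deterministic (the field names and the 2048 kB value mirror the log; this is not the SPDK script itself):

```shell
# Scan meminfo-style "Key: value unit" lines until the wanted key matches.
get=Hugepagesize
result=
while IFS=': ' read -r var val _; do
  # skip every field that is not the one we want (the "continue" lines in the trace)
  [ "$var" = "$get" ] || continue
  result=$val
  break
done <<'EOF'
MemTotal: 60541728 kB
HugePages_Total: 1024
Hugepagesize: 2048 kB
EOF
echo "$result"
```

With `IFS=': '`, each line splits on the colon and spaces, so `var` gets the key and `val` the numeric value; the script prints `2048`.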
03:47:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:14.012 03:47:29 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:14.012 03:47:29 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.012 03:47:29 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.012 03:47:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.012 ************************************ 00:04:14.012 START TEST default_setup 00:04:14.012 ************************************ 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.012 03:47:29 
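The `get_test_nr_hugepages` and `clear_hp` entries in the trace show two steps: the requested pool size in kB divided by the default hugepage size gives the page count (2097152 / 2048 = 1024), and clearing writes 0 into every per-node `nr_hugepages` file. A sketch of both, using a throwaway mock of the sysfs layout so it runs without root (real paths live under /sys/devices/system/node/node*/hugepages/; the mock directory is an assumption for illustration):

```shell
# Step 1: page count from pool size, as in setup/hugepages.sh@49-57.
size=2097152           # requested pool, kB
default_hugepages=2048 # Hugepagesize, kB
nr_hugepages=$((size / default_hugepages))   # 1024 pages

# Step 2: clear_hp-style reset -- echo 0 into each node's nr_hugepages.
mock=$(mktemp -d)
for node in 0 1; do
  for sz in 2048kB 1048576kB; do
    mkdir -p "$mock/node$node/hugepages/hugepages-$sz"
    echo 512 > "$mock/node$node/hugepages/hugepages-$sz/nr_hugepages"
  done
done
for hp in "$mock"/node*/hugepages/hugepages-*; do
  echo 0 > "$hp/nr_hugepages"   # the same write clear_hp issues against sysfs
done
echo "$nr_hugepages"
```

Against real sysfs the loop body is identical; only root privileges and the /sys path differ.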
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.012 03:47:29 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:15.386 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:15.386 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:15.386 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:15.386 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:15.386 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:15.386 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:15.386 0000:00:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:04:15.386 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:15.386 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:15.386 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:15.386 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:15.386 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:15.386 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:15.386 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:15.386 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:15.386 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:16.324 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.324 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44226576 kB' 'MemAvailable: 47731352 kB' 'Buffers: 2704 kB' 'Cached: 11868388 kB' 'SwapCached: 0 kB' 'Active: 8859736 kB' 'Inactive: 3502164 kB' 'Active(anon): 8463488 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494084 kB' 'Mapped: 186184 kB' 'Shmem: 7972680 kB' 'KReclaimable: 200404 kB' 'Slab: 579396 kB' 'SReclaimable: 200404 kB' 'SUnreclaim: 378992 kB' 'KernelStack: 12672 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9584996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 
16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:16.324 [setup/common.sh@31-32 field scan: `IFS=': ' read -r var val _` over every /proc/meminfo line (MemTotal onward), hitting `continue` on each field that is not AnonHugePages; repeated read/continue trace entries elided] 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.325 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.325 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.326 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44227472 kB' 'MemAvailable: 47732244 kB' 'Buffers: 2704 kB' 'Cached: 11868392 kB' 'SwapCached: 0 kB' 'Active: 8860228 kB' 'Inactive: 3502164 kB' 'Active(anon): 8463980 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494620 kB' 'Mapped: 186260 kB' 'Shmem: 7972684 kB' 'KReclaimable: 200396 kB' 'Slab: 579448 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 379052 kB' 'KernelStack: 12736 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9585016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.326 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 
03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.327 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # 
mapfile -t mem 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44227900 kB' 'MemAvailable: 47732672 kB' 'Buffers: 2704 kB' 'Cached: 11868408 kB' 'SwapCached: 0 kB' 'Active: 8860116 kB' 'Inactive: 3502164 kB' 'Active(anon): 8463868 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494460 kB' 'Mapped: 186152 kB' 'Shmem: 7972700 kB' 'KReclaimable: 200396 kB' 'Slab: 579456 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 379060 kB' 'KernelStack: 12736 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9585036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196436 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
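The trace entries above show `get_meminfo` from `setup/common.sh` at work: it snapshots `/proc/meminfo` into an array with `mapfile`, then walks the lines with `IFS=': ' read -r var val _` until the requested key matches. A minimal self-contained sketch of that parsing pattern (a here-doc with values from this trace stands in for `/proc/meminfo`; the real helper reads the live file and strips per-node `Node N` prefixes):

```shell
#!/usr/bin/env bash
# Hedged sketch of the get_meminfo parse loop seen in the trace above.
# The sample data below mirrors the printf'd meminfo dump; the real
# script reads /proc/meminfo (or a per-node meminfo) instead.
get_meminfo() {
  local get=$1 var val _
  # IFS=': ' splits "Key: value kB" into key, value, and unit fields.
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done <<'EOF'
MemTotal: 60541728 kB
MemFree: 44227900 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
EOF
  # Key not found: report 0, matching the trace's "echo 0; return 0" tail.
  echo 0
}

get_meminfo HugePages_Total   # prints 1024
```

The loop visits every key in order, which is why the trace logs one `[[ … ]]` / `continue` pair per meminfo field before reaching the requested one.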
00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.328 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
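The backslash runs in comparisons like `[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]` are not garbling: inside `[[ … ]]` an unquoted right-hand side is a glob pattern, so the script quotes the key to force a literal match, and bash's xtrace prints that quoting as per-character escapes. A small sketch of the distinction:

```shell
#!/usr/bin/env bash
# Inside [[ ]], a quoted (or escaped) RHS compares literally;
# an unquoted RHS is treated as a glob pattern.
[[ "HugePages_Surp" == "HugePages_Surp" ]] && echo "literal match"
[[ "HugePages_Rsvd" == "HugePages_Surp" ]] || echo "no match"
[[ "HugePages_Free" == HugePages_* ]]      && echo "glob match"
```

Without the quoting, a meminfo key containing glob metacharacters could match unintended lines, which is why the helper escapes the lookup key.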
00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 
03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.329 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:16.330 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:16.330 nr_hugepages=1024 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.590 resv_hugepages=0 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.590 surplus_hugepages=0 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.590 anon_hugepages=0 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.590 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44227900 kB' 'MemAvailable: 47732672 kB' 'Buffers: 2704 kB' 'Cached: 11868432 kB' 'SwapCached: 0 kB' 'Active: 8860108 kB' 'Inactive: 3502164 kB' 'Active(anon): 8463860 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494416 kB' 'Mapped: 186152 kB' 'Shmem: 7972724 kB' 'KReclaimable: 200396 kB' 'Slab: 579456 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 379060 kB' 'KernelStack: 12720 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9585060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196436 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.590 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 
03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.591 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 
03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 
03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.592 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.592 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 18884120 kB' 'MemUsed: 13992820 kB' 'SwapCached: 0 kB' 'Active: 7589292 kB' 'Inactive: 3254244 kB' 'Active(anon): 7376484 kB' 'Inactive(anon): 0 kB' 'Active(file): 212808 kB' 'Inactive(file): 3254244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10485816 kB' 'Mapped: 123460 kB' 'AnonPages: 360860 kB' 'Shmem: 7018764 kB' 'KernelStack: 7912 kB' 'PageTables: 5616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131636 kB' 'Slab: 370616 kB' 'SReclaimable: 131636 kB' 'SUnreclaim: 238980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.592 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 
03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:16.593 node0=1024 expecting 1024 00:04:16.593 03:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == 
\1\0\2\4 ]] 00:04:16.593 00:04:16.593 real 0m2.532s 00:04:16.593 user 0m0.672s 00:04:16.593 sys 0m0.930s 00:04:16.594 03:47:31 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.594 03:47:31 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:16.594 ************************************ 00:04:16.594 END TEST default_setup 00:04:16.594 ************************************ 00:04:16.594 03:47:31 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:16.594 03:47:31 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.594 03:47:31 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.594 03:47:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.594 ************************************ 00:04:16.594 START TEST per_node_1G_alloc 00:04:16.594 ************************************ 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.594 03:47:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:16.594 03:47:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.594 03:47:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:17.528 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:17.528 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:17.528 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:17.528 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:17.528 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:17.528 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:17.528 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:17.528 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:17.528 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:17.528 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:17.528 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:17.528 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:17.528 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:17.528 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:17.528 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:17.528 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:17.528 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 
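The trace above shows `get_test_nr_hugepages` dividing the requested 1048576 kB by the default 2048 kB hugepage size and assigning the result to each of nodes 0 and 1, so `verify_nr_hugepages` later expects a system-wide total of 1024. A minimal standalone sketch of that arithmetic (the variable names echo the traced script, but this self-contained form is an illustration, not the script itself):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the per-node hugepage math traced above:
# 1 GiB requested (1048576 kB), 2048 kB default hugepages, HUGENODE=0,1.
size_kb=1048576
hugepage_kb=2048
node_ids=(0 1)

# Pages per node; this is the NRHUGE=512 seen in the trace.
per_node=$(( size_kb / hugepage_kb ))

declare -A nodes_test
for node in "${node_ids[@]}"; do
  nodes_test[$node]=$per_node
done

# verify_nr_hugepages checks the system-wide HugePages_Total against this sum.
total=$(( per_node * ${#node_ids[@]} ))
echo "node0=${nodes_test[0]} node1=${nodes_test[1]} expecting total=$total"
```

Run under bash, this prints `node0=512 node1=512 expecting total=1024`, matching the `HugePages_Total: 1024` reported in the meminfo dump below.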
00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44213772 kB' 'MemAvailable: 47718544 kB' 'Buffers: 2704 kB' 'Cached: 11868500 kB' 'SwapCached: 0 kB' 'Active: 8860556 kB' 'Inactive: 3502164 kB' 'Active(anon): 8464308 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 
kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494672 kB' 'Mapped: 186192 kB' 'Shmem: 7972792 kB' 'KReclaimable: 200396 kB' 'Slab: 579296 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378900 kB' 'KernelStack: 12720 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9585108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.791 03:47:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.791 03:47:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.791 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
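The repeated `[[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]`-style tests in this loop look odd but are deliberate: in `[[ a == b ]]` an unquoted right-hand side is treated as a glob pattern, so backslash-escaping every character forces a literal string comparison. A small sketch of the equivalence (assumed standalone form, not the traced script):

```shell
#!/usr/bin/env bash
# Why the trace escapes every character of the comparison target:
# unquoted RHS of [[ == ]] is a glob; escaping (or quoting) makes it literal.
var='AnonHugePages'
[[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] && escaped=match
[[ $var == "AnonHugePages" ]] && quoted=match   # quoted form is equivalent
echo "$escaped $quoted"
```

Both comparisons succeed, so this prints `match match`; a key containing glob characters would still compare literally either way.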
00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 
03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.792 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.792 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.792 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.792 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44221348 kB' 
'MemAvailable: 47726120 kB' 'Buffers: 2704 kB' 'Cached: 11868504 kB' 'SwapCached: 0 kB' 'Active: 8860528 kB' 'Inactive: 3502164 kB' 'Active(anon): 8464280 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494704 kB' 'Mapped: 186236 kB' 'Shmem: 7972796 kB' 'KReclaimable: 200396 kB' 'Slab: 579320 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378924 kB' 'KernelStack: 12752 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9585128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.793 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.794 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.795 
03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44219800 kB' 'MemAvailable: 47724572 kB' 'Buffers: 2704 kB' 'Cached: 11868520 kB' 'SwapCached: 0 kB' 'Active: 8860412 kB' 'Inactive: 3502164 kB' 'Active(anon): 8464164 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494548 kB' 'Mapped: 186160 kB' 'Shmem: 7972812 kB' 'KReclaimable: 200396 kB' 'Slab: 579336 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378940 kB' 'KernelStack: 12752 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9585148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB'
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 --
# read -r var val _ 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.795 03:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.795 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.796 03:47:33 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:17.796
[trace elided: the setup/common.sh@31-32 read loop skips each remaining /proc/meminfo field (WritebackTmp through HugePages_Free) until it matches HugePages_Rsvd]
00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.797 03:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:17.797 nr_hugepages=1024 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.797 resv_hugepages=0 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.797 surplus_hugepages=0 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.797 anon_hugepages=0 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.797 03:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44219800 kB' 'MemAvailable: 47724572 kB' 'Buffers: 2704 kB' 'Cached: 11868544 kB' 'SwapCached: 0 kB' 'Active: 8860444 kB' 'Inactive: 3502164 kB' 'Active(anon): 8464196 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494544 kB' 'Mapped: 186160 kB' 'Shmem: 7972836 kB' 'KReclaimable: 200396 kB' 'Slab: 579336 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378940 kB' 'KernelStack: 12752 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9585172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.797 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
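The trace above shows `setup/common.sh`'s `get_meminfo` reading a meminfo dump with `IFS=': '` and discarding every field until the requested one matches. A minimal sketch of that scan, simplified from the trace (the real script also handles per-node `/sys/devices/system/node/nodeN/meminfo`; the sample file path here is only for the demo):

```shell
# Simplified sketch of the get_meminfo scan seen in the trace:
# read "Field: value" lines, skip until the requested field, print its value.
get_meminfo() {
  local get=$1 mem_f=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # the "continue" entries in the trace
    echo "$val"
    return 0
  done < "$mem_f"
  return 1
}

# demo against a small sample instead of the live /proc/meminfo
printf '%s\n' 'MemTotal: 60541728 kB' 'HugePages_Total: 1024' \
  'HugePages_Rsvd: 0' > /tmp/meminfo.sample
get_meminfo HugePages_Total /tmp/meminfo.sample   # prints: 1024
```

Each skipped field produces one `[[ … ]]`/`continue` pair in the `set -x` trace, which is why the log repeats that pattern once per meminfo line.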
[trace elided: the setup/common.sh@31-32 read loop skips each /proc/meminfo field (MemFree through CmaFree) until it matches HugePages_Total]
00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
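At this point the trace verifies the global count (`1024 == nr_hugepages + surp + resv`) and then `get_nodes` records an expected 512-page share for each of the two NUMA nodes before querying node0's meminfo. A hedged sketch of that accounting (variable names follow the trace; the even split is an assumption of this illustration, matching the 512/512 values logged):

```shell
# Accounting performed by the hugepages.sh steps in the trace:
# global total must equal nr_hugepages + surplus + reserved, and the
# 1024 pages are expected to be split evenly across the 2 nodes.
nr_hugepages=1024 surp=0 resv=0 no_nodes=2

total=$(( nr_hugepages + surp + resv ))
(( total == 1024 )) || { echo "accounting mismatch" >&2; exit 1; }

declare -A nodes_test
for (( node = 0; node < no_nodes; node++ )); do
  nodes_test[$node]=$(( nr_hugepages / no_nodes ))   # 512 per node
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # prints: node0=512 node1=512
```

The per-node `get_meminfo HugePages_Surp 0` call that follows in the log then checks node0's actual values against this expectation.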
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19921744 kB' 'MemUsed: 12955196 kB' 'SwapCached: 0 kB' 'Active: 7589268 kB' 'Inactive: 3254244 kB' 'Active(anon): 7376460 kB' 'Inactive(anon): 0 kB' 'Active(file): 212808 kB' 'Inactive(file): 3254244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10485820 kB' 'Mapped: 123456 kB' 'AnonPages: 360816 kB' 'Shmem: 7018768 kB' 
'KernelStack: 7944 kB' 'PageTables: 5620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131636 kB' 'Slab: 370644 kB' 'SReclaimable: 131636 kB' 'SUnreclaim: 239008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.060 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.061 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24297300 kB' 'MemUsed: 3367488 kB' 'SwapCached: 0 kB' 'Active: 1271176 kB' 'Inactive: 247920 kB' 'Active(anon): 1087736 kB' 'Inactive(anon): 0 kB' 'Active(file): 183440 kB' 'Inactive(file): 247920 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1385468 kB' 'Mapped: 62704 kB' 'AnonPages: 133696 kB' 'Shmem: 954108 kB' 'KernelStack: 4792 kB' 'PageTables: 2360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68760 kB' 'Slab: 208692 kB' 'SReclaimable: 68760 kB' 'SUnreclaim: 139932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.062 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.063 03:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:18.063 node0=512 expecting 512
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:18.063 node1=512 expecting 512
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:18.063
00:04:18.063 real 0m1.416s
00:04:18.063 user 0m0.575s
00:04:18.063 sys 0m0.802s
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:18.063 03:47:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:18.063 ************************************
00:04:18.063 END TEST per_node_1G_alloc
00:04:18.063 ************************************
00:04:18.063 03:47:33 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:18.063 03:47:33 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:18.063 03:47:33 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:18.063 03:47:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:18.063 ************************************
00:04:18.063 START TEST even_2G_alloc
00:04:18.063 ************************************
00:04:18.063 03:47:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:04:18.063 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:18.063 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:18.063 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:18.063 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:18.063 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:18.063 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:18.063 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:18.063 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:18.064 03:47:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:19.047 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:19.047 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:19.047 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:19.047 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:19.047 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:19.047 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:19.047 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:19.047 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:19.047 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:19.047 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:19.047 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:19.047 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:19.047 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:19.047 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:19.047 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:19.047 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:19.047 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc --
setup/hugepages.sh@94 -- # local anon 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44217168 kB' 'MemAvailable: 47721940 kB' 'Buffers: 2704 kB' 'Cached: 11868644 kB' 'SwapCached: 0 kB' 'Active: 8860272 kB' 'Inactive: 3502164 kB' 'Active(anon): 8464024 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494284 kB' 'Mapped: 186248 kB' 'Shmem: 7972936 kB' 
'KReclaimable: 200396 kB' 'Slab: 579196 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378800 kB' 'KernelStack: 12720 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9585508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.312 03:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44222692 kB' 'MemAvailable: 47727464 kB' 'Buffers: 2704 kB' 'Cached: 11868644 kB' 'SwapCached: 0 kB' 'Active: 8860596 kB' 'Inactive: 3502164 kB' 'Active(anon): 8464348 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494644 kB' 'Mapped: 186248 kB' 'Shmem: 7972936 kB' 'KReclaimable: 200396 kB' 'Slab: 579180 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378784 kB' 'KernelStack: 12768 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9585528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.312 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 
03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.313 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44221940 kB' 'MemAvailable: 47726712 kB' 'Buffers: 2704 kB' 'Cached: 11868644 kB' 'SwapCached: 0 kB' 'Active: 8860624 kB' 'Inactive: 3502164 kB' 'Active(anon): 8464376 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494636 kB' 'Mapped: 186172 kB' 'Shmem: 7972936 kB' 'KReclaimable: 200396 kB' 'Slab: 579212 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378816 kB' 'KernelStack: 12768 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9585548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 
'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.314 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.315 03:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:19.315 03:47:34 setup.sh.hugepages.even_2G_alloc -- [... identical setup/common.sh@32 "continue" iterations elided: the get_meminfo scan skips the remaining /proc/meminfo keys (PageTables through CmaFree) while looking for HugePages_Rsvd ...] 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:19.316 nr_hugepages=1024 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.316 resv_hugepages=0 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.316 
surplus_hugepages=0 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.316 anon_hugepages=0 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44221940 kB' 'MemAvailable: 47726712 kB' 'Buffers: 2704 kB' 'Cached: 11868684 kB' 'SwapCached: 0 kB' 'Active: 8861036 kB' 'Inactive: 3502164 kB' 'Active(anon): 8464788 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 
'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495032 kB' 'Mapped: 186172 kB' 'Shmem: 7972976 kB' 'KReclaimable: 200396 kB' 'Slab: 579204 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378808 kB' 'KernelStack: 12800 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9585572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.316 03:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.316 03:47:34 setup.sh.hugepages.even_2G_alloc -- [... identical setup/common.sh@32 "continue" iterations elided: the get_meminfo scan skips the remaining /proc/meminfo keys (Buffers through Unaccepted) while looking for HugePages_Total ...] 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 
'MemFree: 19924736 kB' 'MemUsed: 12952204 kB' 'SwapCached: 0 kB' 'Active: 7589856 kB' 'Inactive: 3254244 kB' 'Active(anon): 7377048 kB' 'Inactive(anon): 0 kB' 'Active(file): 212808 kB' 'Inactive(file): 3254244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10485896 kB' 'Mapped: 123456 kB' 'AnonPages: 361356 kB' 'Shmem: 7018844 kB' 'KernelStack: 7992 kB' 'PageTables: 5712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131636 kB' 'Slab: 370664 kB' 'SReclaimable: 131636 kB' 'SUnreclaim: 239028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.318 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[identical xtrace scan repeated for each remaining node0 meminfo field until HugePages_Surp matched; repeats omitted]
00:04:19.319 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.319 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.319 03:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.319 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24296460 kB' 'MemUsed: 3368328 kB' 'SwapCached: 0 kB' 'Active: 1270896 kB' 'Inactive: 247920 kB' 'Active(anon): 1087456 kB' 'Inactive(anon): 0 kB' 'Active(file): 183440 kB' 
'Inactive(file): 247920 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1385516 kB' 'Mapped: 62716 kB' 'AnonPages: 133356 kB' 'Shmem: 954156 kB' 'KernelStack: 4792 kB' 'PageTables: 2312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68760 kB' 'Slab: 208540 kB' 'SReclaimable: 68760 kB' 'SUnreclaim: 139780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.320 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[identical xtrace scan repeated for each remaining node1 meminfo field until HugePages_Surp matched; repeats omitted]
00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:19.321 node0=512 expecting 512 00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:19.321 node1=512 expecting 512 00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:19.321 00:04:19.321 real 0m1.343s 00:04:19.321 user 0m0.540s 00:04:19.321 sys 0m0.766s 00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.321 03:47:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:19.321 ************************************ 00:04:19.321 END TEST even_2G_alloc 00:04:19.321 ************************************ 00:04:19.321 03:47:34 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:19.321 03:47:34 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.321 03:47:34 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.321 03:47:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:19.321 ************************************ 00:04:19.321 START TEST odd_alloc 00:04:19.321 ************************************ 
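The long `IFS=': ' read -r var val _` / `continue` runs in the trace above are `get_meminfo` from `setup/common.sh` scanning every `/proc/meminfo` line until the requested key matches. A minimal standalone sketch of that pattern (illustrative names, not the exact SPDK helper; the file argument stands in for the `mem_f` variable seen in the trace):

```shell
# Return the numeric value for one meminfo-style "Key: value [kB]" field.
# $1 = field name (e.g. HugePages_Surp), $2 = meminfo file (defaults to
# /proc/meminfo). Mirrors the trace: split each line on ': ', compare the
# key, echo the value on match, otherwise fall through to 0.
get_meminfo_field() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    echo 0
}
```

The `_` catch-all swallows the trailing `kB` unit, which is why the trace compares bare keys and returns bare numbers (e.g. `echo 0` for `HugePages_Surp`).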
00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 
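The `get_test_nr_hugepages_per_node` steps traced above split the odd total of 1025 pages across 2 NUMA nodes: node1 gets 512 and node0 (filled last) absorbs the remainder, ending up with 513. A hedged sketch of that distribution logic (hypothetical function name, not the SPDK script itself):

```shell
# Distribute $1 hugepages over $2 nodes, giving the integer-division
# remainder to node 0, matching the 513/512 split in the trace above.
split_hugepages_per_node() {
    local total=$1 nodes=$2 i
    local per=$((total / nodes))
    local rem=$((total - per * nodes))
    for ((i = 0; i < nodes; i++)); do
        if ((i == 0)); then
            echo "node$i=$((per + rem))"
        else
            echo "node$i=$per"
        fi
    done
}
```

With an even total (as in the preceding even_2G_alloc test) the remainder is 0 and every node gets the same count, which is why that test expected `node0=512` and `node1=512`.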
00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:19.321 03:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.322 03:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:20.697 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:20.697 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:20.697 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:20.697 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:20.697 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:20.697 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:20.697 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:20.697 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:20.697 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:20.697 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:20.697 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:20.697 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:20.698 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:20.698 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:20.698 0000:80:04.2 (8086 0e22): Already using the 
vfio-pci driver 00:04:20.698 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:20.698 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.698 03:47:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44227876 kB' 'MemAvailable: 47732648 kB' 'Buffers: 2704 kB' 'Cached: 11868772 kB' 'SwapCached: 0 kB' 'Active: 8858036 kB' 'Inactive: 3502164 kB' 'Active(anon): 8461788 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491572 kB' 'Mapped: 185376 kB' 'Shmem: 7973064 kB' 'KReclaimable: 200396 kB' 'Slab: 579060 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378664 kB' 'KernelStack: 12832 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 9573500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.698 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.699 03:47:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44228948 kB' 'MemAvailable: 47733720 kB' 'Buffers: 2704 kB' 'Cached: 11868776 kB' 'SwapCached: 0 kB' 'Active: 8858280 kB' 'Inactive: 3502164 kB' 'Active(anon): 8462032 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492164 kB' 'Mapped: 185360 kB' 'Shmem: 7973068 kB' 'KReclaimable: 200396 kB' 'Slab: 579056 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378660 kB' 'KernelStack: 13056 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 9574376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.699 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.700 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.700 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.700 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.700 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.700 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.700 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.700 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.700 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44228152 kB' 'MemAvailable: 47732924 kB' 'Buffers: 2704 kB' 'Cached: 11868788 kB' 'SwapCached: 0 kB' 'Active: 8859388 kB' 'Inactive: 3502164 kB' 'Active(anon): 8463140 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493308 kB' 'Mapped: 185412 kB' 'Shmem: 7973080 kB' 'KReclaimable: 200396 kB' 'Slab: 579088 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378692 kB' 'KernelStack: 13072 kB' 'PageTables: 9844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 9573176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.701 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.703 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.703 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.703 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 
00:04:20.965 nr_hugepages=1025 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:20.965 resv_hugepages=0 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:20.965 surplus_hugepages=0 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:20.965 anon_hugepages=0 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44226648 kB' 
'MemAvailable: 47731420 kB' 'Buffers: 2704 kB' 'Cached: 11868808 kB' 'SwapCached: 0 kB' 'Active: 8859868 kB' 'Inactive: 3502164 kB' 'Active(anon): 8463620 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493756 kB' 'Mapped: 185352 kB' 'Shmem: 7973100 kB' 'KReclaimable: 200396 kB' 'Slab: 579080 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378684 kB' 'KernelStack: 13008 kB' 'PageTables: 10084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 9574552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.965 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 
03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 
03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.966 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19953248 kB' 'MemUsed: 12923692 kB' 'SwapCached: 0 kB' 'Active: 7586720 kB' 'Inactive: 3254244 kB' 'Active(anon): 7373912 kB' 'Inactive(anon): 0 kB' 'Active(file): 212808 kB' 'Inactive(file): 3254244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10485996 kB' 'Mapped: 122688 kB' 'AnonPages: 358160 kB' 'Shmem: 7018944 kB' 'KernelStack: 7912 kB' 'PageTables: 5280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131636 kB' 'Slab: 370596 kB' 'SReclaimable: 131636 kB' 'SUnreclaim: 238960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.967 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.968 03:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24274284 kB' 'MemUsed: 3390504 kB' 'SwapCached: 0 kB' 'Active: 1270904 kB' 'Inactive: 247920 kB' 'Active(anon): 1087464 kB' 'Inactive(anon): 0 kB' 'Active(file): 183440 kB' 'Inactive(file): 247920 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1385524 kB' 'Mapped: 62672 kB' 'AnonPages: 133364 kB' 'Shmem: 954164 kB' 'KernelStack: 4872 kB' 'PageTables: 2156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68760 kB' 'Slab: 208476 kB' 'SReclaimable: 68760 kB' 'SUnreclaim: 139716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.968 03:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.968 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.969 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.969 03:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 
00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:20.970 node0=512 expecting 513 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:20.970 node1=513 expecting 512 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:20.970 00:04:20.970 real 0m1.499s 00:04:20.970 user 0m0.626s 00:04:20.970 sys 0m0.838s 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.970 03:47:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:20.970 ************************************ 00:04:20.970 END TEST odd_alloc 00:04:20.970 ************************************ 00:04:20.970 03:47:36 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:20.970 03:47:36 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.970 03:47:36 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.970 03:47:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:20.970 ************************************ 00:04:20.970 START TEST custom_alloc 00:04:20.970 
************************************ 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 
0 > 0 )) 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@62 -- # local user_nodes 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.970 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.971 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.971 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.971 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:20.971 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.971 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:20.971 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.971 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:20.971 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:20.971 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:20.971 03:47:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:20.971 03:47:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.971 03:47:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.905 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:21.905 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:21.905 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:04:21.905 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:21.905 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:21.905 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:21.905 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:21.905 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:21.905 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:21.905 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:21.905 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:21.905 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:21.905 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:21.905 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:21.905 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:21.905 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:21.905 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.167 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43173580 kB' 'MemAvailable: 46678352 kB' 'Buffers: 2704 kB' 'Cached: 11868904 kB' 'SwapCached: 0 kB' 'Active: 8857476 kB' 'Inactive: 3502164 kB' 'Active(anon): 8461228 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491232 kB' 'Mapped: 185504 kB' 'Shmem: 7973196 kB' 'KReclaimable: 200396 kB' 'Slab: 579492 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 379096 kB' 'KernelStack: 12704 kB' 'PageTables: 7584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 
kB' 'Committed_AS: 9572396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.168 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 
03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43177940 kB' 'MemAvailable: 46682712 kB' 'Buffers: 2704 kB' 'Cached: 11868904 kB' 'SwapCached: 0 kB' 'Active: 8858160 kB' 'Inactive: 3502164 kB' 'Active(anon): 8461912 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491960 kB' 'Mapped: 185440 kB' 'Shmem: 7973196 kB' 'KReclaimable: 200396 kB' 'Slab: 579492 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 379096 kB' 'KernelStack: 12800 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 9572416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.169 
03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.169 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.170 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.171 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43177716 kB' 'MemAvailable: 46682488 kB' 'Buffers: 2704 kB' 'Cached: 11868920 kB' 'SwapCached: 0 kB' 'Active: 8857536 kB' 'Inactive: 3502164 kB' 'Active(anon): 8461288 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491396 kB' 'Mapped: 185440 kB' 'Shmem: 7973212 kB' 'KReclaimable: 200396 kB' 'Slab: 579476 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 379080 kB' 'KernelStack: 12752 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 9572436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196404 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.171 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.171 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:22.173 nr_hugepages=1536 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.173 resv_hugepages=0 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.173 surplus_hugepages=0 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.173 anon_hugepages=0 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 
-- # local var val 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.173 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43177632 kB' 'MemAvailable: 46682404 kB' 'Buffers: 2704 kB' 'Cached: 11868944 kB' 'SwapCached: 0 kB' 'Active: 8857368 kB' 'Inactive: 3502164 kB' 'Active(anon): 8461120 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491104 kB' 'Mapped: 185364 kB' 'Shmem: 7973236 kB' 'KReclaimable: 200396 kB' 'Slab: 579500 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 379104 kB' 'KernelStack: 12704 kB' 'PageTables: 7500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 9572456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 
03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 
03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.174 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:22.175 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # 
local var val 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19956660 kB' 'MemUsed: 12920280 kB' 'SwapCached: 0 kB' 'Active: 7586940 kB' 'Inactive: 3254244 kB' 'Active(anon): 7374132 kB' 'Inactive(anon): 0 kB' 'Active(file): 212808 kB' 'Inactive(file): 3254244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10486140 kB' 'Mapped: 122684 kB' 'AnonPages: 358200 kB' 'Shmem: 7019088 kB' 'KernelStack: 7944 kB' 'PageTables: 5328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131636 kB' 'Slab: 370864 kB' 'SReclaimable: 131636 kB' 'SUnreclaim: 239228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.176 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.177 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:22.437 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 23222716 kB' 'MemUsed: 4442072 kB' 'SwapCached: 0 kB' 'Active: 1270864 kB' 'Inactive: 247920 kB' 'Active(anon): 1087424 kB' 'Inactive(anon): 0 kB' 'Active(file): 183440 kB' 'Inactive(file): 247920 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1385532 kB' 'Mapped: 62680 kB' 'AnonPages: 133436 kB' 'Shmem: 954172 kB' 'KernelStack: 4856 kB' 'PageTables: 2520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68760 kB' 'Slab: 208636 kB' 'SReclaimable: 68760 kB' 'SUnreclaim: 139876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 
03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.437 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:22.438 node0=512 expecting 512 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:22.438 node1=1024 expecting 1024 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:22.438 00:04:22.438 real 0m1.377s 00:04:22.438 user 0m0.586s 00:04:22.438 sys 0m0.751s 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.438 03:47:37 setup.sh.hugepages.custom_alloc -- 
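The long runs of `IFS=': '` / `read -r var val _` / `continue` iterations above are the xtrace of SPDK's `get_meminfo` helper scanning a meminfo file for one key (`HugePages_Surp`) per NUMA node. The real helper lives in the repo's `test/setup/common.sh`; the standalone sketch below only mirrors the behavior observable in this log, and the `MEMINFO_F` override is an assumption added here purely so the sketch can be exercised without `/proc` or `/sys`.

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop whose xtrace fills the log above.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=${MEMINFO_F:-/proc/meminfo}   # MEMINFO_F: test-only override
    # With a node argument, prefer the per-NUMA-node view if present
    # (the "/sys/devices/system/node/node1/meminfo" path seen in the log).
    if [[ -z ${MEMINFO_F:-} && -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"                 # common.sh@28 in the log
    local line var val _
    for line in "${mem[@]}"; do
        # Per-node files prefix each line with "Node N "; drop that prefix.
        # (The original uses an extglob expansion for the same purpose.)
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        # Split "Key: value kB" the way the log does: IFS=': '.
        IFS=': ' read -r var val _ <<< "$line"
        # Non-matching keys hit "continue" in the log; the match echoes
        # the value and returns 0 (common.sh@33).
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
```

A call like `get_meminfo HugePages_Surp 1` corresponds to the `hugepages.sh@117` invocation above: the echoed value (`0` here) is then accumulated into `nodes_test[node]` before the `node1=1024 expecting 1024` check.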
common/autotest_common.sh@10 -- # set +x 00:04:22.438 ************************************ 00:04:22.438 END TEST custom_alloc 00:04:22.438 ************************************ 00:04:22.438 03:47:37 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:22.438 03:47:37 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.438 03:47:37 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.438 03:47:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.438 ************************************ 00:04:22.438 START TEST no_shrink_alloc 00:04:22.438 ************************************ 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 
00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:22.438 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.439 03:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:23.372 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:23.372 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:23.372 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:23.372 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:23.372 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:23.372 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:23.372 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:23.372 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:23.372 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:23.372 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:23.372 0000:80:04.6 (8086 0e26): Already using 
the vfio-pci driver 00:04:23.372 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:23.372 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:23.372 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:23.372 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:23.372 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:23.372 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.635 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- 
# [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44193284 kB' 'MemAvailable: 47698056 kB' 'Buffers: 2704 kB' 'Cached: 11869036 kB' 'SwapCached: 0 kB' 'Active: 8858288 kB' 'Inactive: 3502164 kB' 'Active(anon): 8462040 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491868 kB' 'Mapped: 185404 kB' 'Shmem: 7973328 kB' 'KReclaimable: 200396 kB' 'Slab: 579336 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378940 kB' 'KernelStack: 12768 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9572864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.636 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.637 
03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44193568 kB' 'MemAvailable: 47698340 kB' 'Buffers: 2704 kB' 'Cached: 11869036 kB' 'SwapCached: 0 kB' 'Active: 8858196 kB' 'Inactive: 3502164 kB' 'Active(anon): 8461948 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491760 kB' 'Mapped: 185380 kB' 'Shmem: 7973328 kB' 'KReclaimable: 200396 kB' 'Slab: 579308 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378912 kB' 'KernelStack: 12800 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9572880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196436 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.637 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.638 03:47:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.638 
03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.638 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.638
[... identical `IFS=': ' / read -r var val _ / [[ <key> == HugePages_Surp ]] / continue` trace iterations elided for the remaining non-matching /proc/meminfo keys (Inactive(file) through HugePages_Rsvd) ...]
03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.639
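The trace above is the `get_meminfo` helper from `setup/common.sh` scanning `/proc/meminfo` key by key with a field-splitting `read` loop until the requested counter matches. A minimal sketch of that pattern follows; the function name `get_meminfo_value` and the optional file argument are illustrative, not the exact SPDK implementation:

```shell
# Sketch of the /proc/meminfo scan seen in the trace above (assumption:
# simplified from SPDK's get_meminfo; names here are illustrative).
# Usage: get_meminfo_value KEY [FILE]  -> prints KEY's numeric value.
get_meminfo_value() {
    local get=$1
    local mem_f=${2:-/proc/meminfo}
    local var val _rest
    # Split each "Key:   value kB" line on ':' and whitespace, as the
    # traced loop does with IFS=': ' and read -r var val _.
    while IFS=': ' read -r var val _rest; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1   # key not found
}
```

On a host configured like the one in this log, `get_meminfo_value HugePages_Surp` would print `0`, matching the `surp=0` assignment recorded in the trace.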
03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44193984 kB' 'MemAvailable: 47698756 kB' 'Buffers: 2704 kB' 'Cached: 11869056 kB' 'SwapCached: 0 kB' 'Active: 8858056 kB' 'Inactive: 3502164 kB' 'Active(anon): 8461808 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491632 kB' 'Mapped: 185380 kB' 'Shmem: 7973348 kB' 'KReclaimable: 200396 kB' 'Slab: 579328 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378932 kB' 'KernelStack: 12784 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9572904 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196436 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.639 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.639
[... identical `IFS=': ' / read -r var val _ / [[ <key> == HugePages_Rsvd ]] / continue` trace iterations elided for the non-matching /proc/meminfo keys (MemFree through AnonHugePages) ...]
03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@100 -- # resv=0 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:23.641 nr_hugepages=1024 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.641 resv_hugepages=0 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.641 surplus_hugepages=0 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.641 anon_hugepages=0 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.641 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44194368 kB' 'MemAvailable: 47699140 kB' 'Buffers: 2704 kB' 'Cached: 11869076 kB' 'SwapCached: 0 kB' 'Active: 8858044 kB' 'Inactive: 3502164 kB' 'Active(anon): 8461796 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491600 kB' 'Mapped: 185380 kB' 'Shmem: 7973368 kB' 'KReclaimable: 200396 kB' 'Slab: 579328 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378932 kB' 'KernelStack: 12768 kB' 'PageTables: 7652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9572924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196436 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.642 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.643 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.644 
03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 18914100 kB' 'MemUsed: 13962840 kB' 'SwapCached: 0 kB' 'Active: 7586564 kB' 'Inactive: 3254244 kB' 'Active(anon): 7373756 kB' 'Inactive(anon): 0 kB' 'Active(file): 212808 kB' 'Inactive(file): 3254244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10486196 kB' 'Mapped: 122688 kB' 'AnonPages: 357692 kB' 'Shmem: 7019144 kB' 'KernelStack: 7944 kB' 'PageTables: 5276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131636 kB' 'Slab: 370716 kB' 'SReclaimable: 131636 kB' 'SUnreclaim: 239080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:23.644-00:04:23.904 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive per-field /proc/meminfo scan elided: MemFree through Unaccepted read and skipped, no HugePages_Surp match yet] 00:04:23.904 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.904 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.904 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.904 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.904 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.904 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.904 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.904 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:23.904 node0=1024 expecting 1024 00:04:23.904 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:23.904 03:47:38
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:23.904 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:23.904 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:23.904 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.904 03:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:24.839 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:24.839 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:24.839 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:24.839 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:24.839 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:24.839 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:24.839 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:24.839 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:24.839 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:24.839 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:24.839 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:24.839 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:24.839 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:24.839 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:24.839 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:24.839 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:24.839 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:25.102 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:25.102 03:47:40 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.102 03:47:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44187724 kB' 'MemAvailable: 47692496 kB' 'Buffers: 2704 kB' 'Cached: 11869144 kB' 'SwapCached: 0 kB' 'Active: 8858500 kB' 'Inactive: 3502164 kB' 'Active(anon): 8462252 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492204 kB' 'Mapped: 185492 kB' 'Shmem: 7973436 kB' 'KReclaimable: 200396 kB' 'Slab: 579292 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378896 kB' 'KernelStack: 12832 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9573104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.102 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.102 
03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive per-field /proc/meminfo scan elided: MemAvailable through HardwareCorrupted read and skipped, no AnonHugePages match yet]
00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.103 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 
44190480 kB' 'MemAvailable: 47695252 kB' 'Buffers: 2704 kB' 'Cached: 11869144 kB' 'SwapCached: 0 kB' 'Active: 8858408 kB' 'Inactive: 3502164 kB' 'Active(anon): 8462160 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492092 kB' 'Mapped: 185464 kB' 'Shmem: 7973436 kB' 'KReclaimable: 200396 kB' 'Slab: 579264 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378868 kB' 'KernelStack: 12800 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9573120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 
03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.104 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.105 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44190532 kB' 'MemAvailable: 47695304 kB' 'Buffers: 2704 kB' 'Cached: 11869168 kB' 'SwapCached: 0 kB' 'Active: 8858232 kB' 'Inactive: 3502164 kB' 'Active(anon): 8461984 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491852 kB' 'Mapped: 185388 kB' 'Shmem: 7973460 kB' 'KReclaimable: 200396 kB' 'Slab: 579244 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378848 kB' 'KernelStack: 12800 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9573144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.106 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:25.107 nr_hugepages=1024 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.107 resv_hugepages=0 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.107 surplus_hugepages=0 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.107 
anon_hugepages=0 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.107 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44190412 kB' 'MemAvailable: 47695184 kB' 'Buffers: 2704 kB' 'Cached: 11869196 kB' 'SwapCached: 0 kB' 'Active: 8858244 kB' 'Inactive: 3502164 kB' 'Active(anon): 8461996 kB' 'Inactive(anon): 0 kB' 'Active(file): 396248 kB' 'Inactive(file): 3502164 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491852 kB' 'Mapped: 185388 kB' 'Shmem: 7973488 kB' 'KReclaimable: 200396 kB' 'Slab: 579244 kB' 'SReclaimable: 200396 kB' 'SUnreclaim: 378848 kB' 'KernelStack: 12800 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 9573164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2166364 kB' 'DirectMap2M: 16627712 kB' 'DirectMap1G: 50331648 kB' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 
03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 
03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.108 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.109 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.110 03:47:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 18896348 kB' 'MemUsed: 13980592 kB' 'SwapCached: 0 kB' 'Active: 7586816 kB' 'Inactive: 3254244 kB' 'Active(anon): 7374008 kB' 'Inactive(anon): 0 kB' 'Active(file): 212808 kB' 'Inactive(file): 3254244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10486204 kB' 'Mapped: 122688 kB' 'AnonPages: 357992 kB' 'Shmem: 7019152 kB' 'KernelStack: 7992 kB' 'PageTables: 5328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131636 kB' 'Slab: 370580 kB' 'SReclaimable: 131636 kB' 'SUnreclaim: 238944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.110 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.111 03:47:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:25.111 node0=1024 expecting 1024 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.111 00:04:25.111 real 0m2.793s 00:04:25.111 user 0m1.201s 00:04:25.111 sys 0m1.515s 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.111 03:47:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:25.111 ************************************ 00:04:25.111 END TEST no_shrink_alloc 00:04:25.111 ************************************ 00:04:25.111 03:47:40 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:25.111 03:47:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:25.111 03:47:40 setup.sh.hugepages -- 
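The long runs of `continue` lines above are the `get_meminfo` helper (setup/common.sh@17-33) scanning every meminfo field until it hits the requested key (`HugePages_Total`, then `HugePages_Surp` for node 0) and echoing its value. A minimal sketch of that loop, with the assumption that the meminfo file is passed as an argument instead of derived from a node number as the real helper does:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop traced above (setup/common.sh@17-33).
# Assumption: simplified to take the meminfo file path directly; the real
# helper picks /proc/meminfo or /sys/devices/system/node/nodeN/meminfo.
get_meminfo_sketch() {  # usage: get_meminfo_sketch <key> <meminfo-file>
  local get=$1 mem_f=$2 var val _
  # Per-node files prefix each line with "Node N "; strip it first, as the
  # real helper does with: mem=("${mem[@]#Node +([0-9]) }")
  while IFS=': ' read -r var val _; do
    # Skip ("continue" in the trace) every field until the key matches,
    # then echo its value and stop -- exactly the loop filling the log.
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < <(sed 's/^Node [0-9]* //' "$mem_f")
  return 1
}
```

With the node 0 values printed in the trace, `get_meminfo_sketch HugePages_Total <file>` would emit `1024` and `get_meminfo_sketch HugePages_Surp <file>` would emit `0`, matching the `echo 1024` / `echo 0` lines above.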
setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:25.111 03:47:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.111 03:47:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.111 03:47:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.111 03:47:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.111 03:47:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:25.111 03:47:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.111 03:47:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.111 03:47:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.111 03:47:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.111 03:47:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:25.111 03:47:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:25.111 00:04:25.111 real 0m11.343s 00:04:25.111 user 0m4.372s 00:04:25.111 sys 0m5.835s 00:04:25.111 03:47:40 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.111 03:47:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.111 ************************************ 00:04:25.111 END TEST hugepages 00:04:25.111 ************************************ 00:04:25.111 03:47:40 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:25.111 03:47:40 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.111 03:47:40 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.111 03:47:40 setup.sh -- common/autotest_common.sh@10 
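The `clear_hp` teardown traced above (hugepages.sh@37-45) walks every NUMA node's hugepage directories and echoes 0 into each, releasing the reserved pool. A sketch under the assumption that the sysfs root is parameterized for illustration (the real function hardcodes `/sys/devices/system/node`):

```shell
#!/usr/bin/env bash
# Sketch of clear_hp (hugepages.sh@37-45 in the trace above).
# Assumption: the root argument is an illustrative testability knob.
clear_hp_sketch() {
  local root=${1:-/sys/devices/system/node} node hp
  for node in "$root"/node*[0-9]; do
    for hp in "$node"/hugepages/hugepages-*; do
      [[ -e $hp ]] || continue       # node has no hugepage size dirs
      # Writing 0 to nr_hugepages frees that node's reserved huge pages.
      echo 0 > "$hp/nr_hugepages"
    done
  done
  export CLEAR_HUGE=yes  # signals later setup.sh runs that the pool was cleared
}
```

The two `echo 0` pairs in the trace correspond to the two nodes (`no_nodes=2`) times the two hugepage sizes present on this machine.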
-- # set +x 00:04:25.111 ************************************ 00:04:25.111 START TEST driver 00:04:25.111 ************************************ 00:04:25.111 03:47:40 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:25.370 * Looking for test storage... 00:04:25.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:25.370 03:47:40 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:25.370 03:47:40 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.370 03:47:40 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.900 03:47:42 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:27.900 03:47:42 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.900 03:47:42 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.900 03:47:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:27.900 ************************************ 00:04:27.900 START TEST guess_driver 00:04:27.900 ************************************ 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e 
/sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:27.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:27.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:27.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:27.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:27.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:27.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:27.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:27.900 
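The `pick_driver`/`vfio` trace above (driver.sh@21-37) selects `vfio-pci` when the IOMMU is active (141 groups under `/sys/kernel/iommu_groups`, or unsafe no-IOMMU mode enabled) and `modprobe --show-depends vfio_pci` resolves to real `.ko` objects. The decision can be restated as a pure function for illustration; this is an assumption-laden condensation, not the script itself, since the real code inspects sysfs and runs modprobe directly:

```shell
#!/usr/bin/env bash
# Sketch of the vfio usability check (driver.sh@21-37 in the trace above),
# restated over explicit inputs so the logic is visible in isolation.
vfio_ok_sketch() {  # usage: vfio_ok_sketch <iommu_group_count> <unsafe_flag:Y|N> <modprobe_output>
  local groups=$1 unsafe=$2 deps=$3
  # IOMMU must be on (groups present) unless unsafe no-IOMMU mode is enabled.
  (( groups > 0 )) || [[ $unsafe == [yY] ]] || return 1
  # modprobe --show-depends lists the module chain without loading anything;
  # it must name actual kernel objects, matching the *\.\k\o* test at @12.
  [[ $deps == *.ko* ]]
}
```

On this host the check passes (`(( 141 > 0 ))`, insmod lines ending in `.ko.xz`), so the script echoes `vfio-pci` and sets `driver=vfio-pci`.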
03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:27.900 Looking for driver=vfio-pci 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.900 03:47:42 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:28.832 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.832 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.832 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.832 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.832 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.832 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.832 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.832 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.832 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.832 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.832 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.832 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.832 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.832 03:47:44 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.832 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.090 03:47:44 
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.090 03:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.023 03:47:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.023 03:47:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.023 03:47:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.023 03:47:45 
setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:30.023 03:47:45 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:30.023 03:47:45 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.023 03:47:45 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.550 00:04:32.550 real 0m4.874s 00:04:32.550 user 0m1.141s 00:04:32.550 sys 0m1.861s 00:04:32.550 03:47:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.550 03:47:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:32.550 ************************************ 00:04:32.550 END TEST guess_driver 00:04:32.550 ************************************ 00:04:32.550 00:04:32.550 real 0m7.380s 00:04:32.550 user 0m1.687s 00:04:32.550 sys 0m2.829s 00:04:32.550 03:47:47 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.550 03:47:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:32.550 ************************************ 00:04:32.550 END TEST driver 00:04:32.550 ************************************ 00:04:32.550 03:47:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:32.550 03:47:47 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.550 03:47:47 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.550 03:47:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:32.550 ************************************ 00:04:32.550 START TEST devices 00:04:32.550 ************************************ 00:04:32.550 03:47:47 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:32.807 * Looking for test storage... 
00:04:32.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:32.807 03:47:47 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:32.807 03:47:47 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:32.807 03:47:47 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.807 03:47:47 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:34.182 03:47:49 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:34.182 03:47:49 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:34.182 03:47:49 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:34.182 03:47:49 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:34.182 03:47:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:34.182 03:47:49 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:34.182 03:47:49 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:34.182 03:47:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:34.182 03:47:49 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:34.182 03:47:49 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:34.182 03:47:49 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:34.182 03:47:49 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:34.182 03:47:49 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:34.182 03:47:49 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:34.182 03:47:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:34.182 03:47:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:04:34.182 03:47:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:34.182 03:47:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:34.182 03:47:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:34.182 03:47:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:34.182 03:47:49 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:34.182 03:47:49 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:34.182 No valid GPT data, bailing 00:04:34.182 03:47:49 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:34.182 03:47:49 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:34.183 03:47:49 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:34.183 03:47:49 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:34.183 03:47:49 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:34.183 03:47:49 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:34.183 03:47:49 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:34.183 03:47:49 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:34.183 03:47:49 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:34.183 03:47:49 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:34.183 03:47:49 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:34.183 03:47:49 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:34.183 03:47:49 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:34.183 03:47:49 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.183 03:47:49 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.183 03:47:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:34.183 ************************************ 00:04:34.183 START TEST nvme_mount 00:04:34.183 ************************************ 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:34.183 03:47:49 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:34.183 03:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:35.164 Creating new GPT entries in memory. 00:04:35.164 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:35.164 other utilities. 00:04:35.164 03:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:35.164 03:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.164 03:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:35.164 03:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:35.164 03:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:36.099 Creating new GPT entries in memory. 00:04:36.099 The operation has completed successfully. 
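The partition step above converts size=1073741824 bytes into 512-byte sectors (`size /= 512` gives 2097152) and issues `sgdisk --new=1:2048:2099199`: the first partition starts at sector 2048 and ends at part_start + size - 1 = 2099199. A sketch of that arithmetic as a standalone helper (`partition_range` is a hypothetical name, not from setup/common.sh):

```shell
# Reproduces the sector math behind "sgdisk /dev/nvme0n1 --new=1:2048:2099199"
# in the trace: a 1 GiB partition expressed in 512-byte sectors, starting at 2048.
partition_range() {
    local part=$1 size_bytes=$2 part_start=$3
    local size=$(( size_bytes / 512 ))            # bytes -> 512-byte sectors
    local part_end=$(( part_start + size - 1 ))   # inclusive end sector
    echo "$part:$part_start:$part_end"
}
```

Usage would look like `sgdisk /dev/nvme0n1 --new="$(partition_range 1 1073741824 2048)"`; note sgdisk itself is destructive, so only the pure arithmetic is sketched here.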
00:04:36.099 03:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:36.099 03:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.099 03:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 686726 00:04:36.099 03:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.099 03:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:36.099 03:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.099 03:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:36.099 03:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.357 03:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.291 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.549 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:37.549 
03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.550 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.550 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.550 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:37.550 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.550 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.550 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.808 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:37.808 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:37.808 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:37.808 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.808 03:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.742 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.742 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:38.742 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:38.742 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.742 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.742 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.742 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.742 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.742 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.743 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.743 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.743 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.743 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.743 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.743 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.743 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.743 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.743 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:38.743 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:39.002 03:47:54 
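The verify loop running through these lines consumes `setup.sh config` output with `read -r pci _ _ status` and compares each BDF against the single allowed device (0000:88:00.0), skipping every other controller. A hedged sketch of that filter, operating on plain text so it can run anywhere; `find_status` is a hypothetical helper and the input line format is inferred from the log, not taken from devices.sh:

```shell
# Sketch of the verify scan in the trace: read "BDF _ _ status..." lines from
# stdin and report whether the allowed device's status names active devices.
find_status() {
    local allowed=$1 pci _ status found=0
    while read -r pci _ _ status; do
        [[ $pci == "$allowed" ]] || continue
        # In the log the matching line's status begins "Active devices: ..."
        [[ $status == *"Active devices:"* ]] && found=1
    done
    echo "$found"
}
```

Piping the config listing into such a function mirrors how the script sets `found=1` only for the mount/data entry on 0000:88:00.0 while the 0000:00:04.x and 0000:80:04.x DMA engines fall through.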
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.002 03:47:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:40.377 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:40.377 00:04:40.377 real 0m6.276s 00:04:40.377 user 0m1.446s 00:04:40.377 sys 0m2.392s 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.377 03:47:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:40.377 ************************************ 00:04:40.377 END TEST nvme_mount 00:04:40.377 ************************************ 00:04:40.377 03:47:55 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:40.377 03:47:55 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:04:40.377 03:47:55 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.377 03:47:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:40.377 ************************************ 00:04:40.377 START TEST dm_mount 00:04:40.377 ************************************ 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:40.377 03:47:55 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:41.751 Creating new GPT entries in memory. 00:04:41.751 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:41.751 other utilities. 00:04:41.751 03:47:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:41.751 03:47:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.751 03:47:56 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.751 03:47:56 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.751 03:47:56 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:42.684 Creating new GPT entries in memory. 00:04:42.684 The operation has completed successfully. 00:04:42.684 03:47:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.684 03:47:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.684 03:47:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:42.684 03:47:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:42.684 03:47:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:43.624 The operation has completed successfully. 
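Note: the two `flock /dev/nvme0n1 sgdisk --new=...` invocations traced above come from the `partition_drive` loop in `setup/common.sh`, which converts the 1 GiB `size=1073741824` into 512-byte sectors and chains the partitions end to end. A minimal sketch of that arithmetic (assuming 512-byte sectors and the 2048-sector first-partition start seen in the log; this reproduces the traced ranges, it is not the script itself):

```shell
#!/usr/bin/env bash
# Reproduce the sector ranges from the traced sgdisk calls:
#   --new=1:2048:2099199 and --new=2:2099200:4196351
size=$((1073741824 / 512))   # 1 GiB in 512-byte sectors = 2097152
part_start=0
part_end=0
for part in 1 2; do
  # First partition starts at sector 2048; each later one starts
  # right after the previous partition's last sector.
  part_start=$(( part_start == 0 ? 2048 : part_end + 1 ))
  part_end=$(( part_start + size - 1 ))
  echo "partition ${part}: ${part_start}:${part_end}"
done
```

Running it prints `partition 1: 2048:2099199` and `partition 2: 2099200:4196351`, matching the `--new=1:...` and `--new=2:...` arguments in the trace.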
00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 689127 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.624 03:47:58 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.995 03:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.995 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.995 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:44.995 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.995 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:44.995 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:44.995 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.995 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:44.995 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:44.995 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:44.995 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:44.995 03:48:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:04:44.995 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.995 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:44.995 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.996 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.996 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:44.996 03:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.996 03:48:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.996 03:48:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.929 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.930 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.930 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.930 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.930 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.930 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.930 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.930 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.930 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.930 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.930 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.930 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.188 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.188 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:46.188 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:46.188 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:46.188 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.188 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:46.188 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:46.188 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.188 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:46.188 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:04:46.188 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:46.188 03:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:46.188 00:04:46.188 real 0m5.715s 00:04:46.188 user 0m0.994s 00:04:46.188 sys 0m1.582s 00:04:46.188 03:48:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.188 03:48:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:46.188 ************************************ 00:04:46.188 END TEST dm_mount 00:04:46.188 ************************************ 00:04:46.188 03:48:01 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:46.188 03:48:01 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:46.188 03:48:01 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.188 03:48:01 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.188 03:48:01 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:46.188 03:48:01 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.188 03:48:01 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.446 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:46.446 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:46.446 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:46.446 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:46.446 03:48:01 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:46.446 03:48:01 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.446 03:48:01 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:04:46.446 03:48:01 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.446 03:48:01 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:46.446 03:48:01 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.446 03:48:01 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:46.446 00:04:46.446 real 0m13.853s 00:04:46.446 user 0m3.083s 00:04:46.446 sys 0m4.952s 00:04:46.446 03:48:01 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.446 03:48:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:46.446 ************************************ 00:04:46.446 END TEST devices 00:04:46.446 ************************************ 00:04:46.446 00:04:46.446 real 0m43.187s 00:04:46.446 user 0m12.478s 00:04:46.446 sys 0m18.886s 00:04:46.446 03:48:01 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.446 03:48:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:46.446 ************************************ 00:04:46.446 END TEST setup.sh 00:04:46.446 ************************************ 00:04:46.446 03:48:01 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:47.818 Hugepages 00:04:47.818 node hugesize free / total 00:04:47.818 node0 1048576kB 0 / 0 00:04:47.818 node0 2048kB 2048 / 2048 00:04:47.818 node1 1048576kB 0 / 0 00:04:47.818 node1 2048kB 0 / 0 00:04:47.818 00:04:47.818 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:47.818 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:47.818 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:47.818 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:47.818 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:47.818 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:47.818 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:47.818 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:47.818 I/OAT 
0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:47.818 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:47.818 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:47.818 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:47.818 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:47.818 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:47.818 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:47.818 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:47.818 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:47.818 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:47.818 03:48:02 -- spdk/autotest.sh@130 -- # uname -s 00:04:47.818 03:48:02 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:47.818 03:48:02 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:47.818 03:48:02 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:49.191 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:49.191 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:49.191 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:49.191 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:49.191 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:49.191 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:49.191 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:49.191 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:49.191 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:49.191 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:49.191 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:49.191 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:49.191 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:49.191 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:49.191 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:49.191 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:50.127 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:50.127 03:48:05 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:51.061 03:48:06 -- 
common/autotest_common.sh@1533 -- # bdfs=()
00:04:51.061 03:48:06 -- common/autotest_common.sh@1533 -- # local bdfs
00:04:51.061 03:48:06 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs))
00:04:51.061 03:48:06 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs
00:04:51.061 03:48:06 -- common/autotest_common.sh@1513 -- # bdfs=()
00:04:51.061 03:48:06 -- common/autotest_common.sh@1513 -- # local bdfs
00:04:51.061 03:48:06 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:51.061 03:48:06 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:51.061 03:48:06 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:04:51.061 03:48:06 -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:04:51.061 03:48:06 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0
00:04:51.061 03:48:06 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:52.439 Waiting for block devices as requested
00:04:52.439 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:04:52.439 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:04:52.439 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:04:52.733 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:04:52.733 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:04:52.733 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:04:52.733 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:04:52.991 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:04:52.991 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:04:52.991 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:04:52.991 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:04:53.249 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:04:53.249 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:04:53.249 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:04:53.249 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:04:53.508 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:04:53.508 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:04:53.508 03:48:08 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}"
00:04:53.508 03:48:08 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0
00:04:53.508 03:48:08 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0
00:04:53.508 03:48:08 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme
00:04:53.508 03:48:08 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:04:53.508 03:48:08 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]]
00:04:53.508 03:48:08 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:04:53.508 03:48:08 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0
00:04:53.508 03:48:08 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0
00:04:53.508 03:48:08 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]]
00:04:53.508 03:48:08 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0
00:04:53.508 03:48:08 -- common/autotest_common.sh@1545 -- # grep oacs
00:04:53.508 03:48:08 -- common/autotest_common.sh@1545 -- # cut -d: -f2
00:04:53.508 03:48:08 -- common/autotest_common.sh@1545 -- # oacs=' 0xf'
00:04:53.508 03:48:08 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8
00:04:53.508 03:48:08 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]]
00:04:53.508 03:48:08 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0
00:04:53.508 03:48:08 -- common/autotest_common.sh@1554 -- # grep unvmcap
00:04:53.508 03:48:08 -- common/autotest_common.sh@1554 -- # cut -d: -f2
00:04:53.508 03:48:08 -- common/autotest_common.sh@1554 -- # unvmcap=' 0'
00:04:53.508 03:48:08 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]]
00:04:53.508 03:48:08 -- common/autotest_common.sh@1557 -- # continue
00:04:53.508 03:48:08 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup
00:04:53.508 03:48:08 -- common/autotest_common.sh@730 -- # xtrace_disable
00:04:53.508 03:48:08 -- common/autotest_common.sh@10 -- # set +x
00:04:53.766 03:48:08 -- spdk/autotest.sh@138 -- # timing_enter afterboot
00:04:53.766 03:48:08 -- common/autotest_common.sh@724 -- # xtrace_disable
00:04:53.766 03:48:08 -- common/autotest_common.sh@10 -- # set +x
00:04:53.766 03:48:08 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:54.700 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:54.700 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:54.700 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:54.700 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:54.700 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:54.700 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:54.700 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:54.958 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:54.958 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:54.958 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:54.958 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:54.958 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:54.958 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:54.958 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:54.958 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:54.958 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:55.891 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:04:55.891 03:48:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot
00:04:55.891 03:48:11 -- common/autotest_common.sh@730 -- # xtrace_disable
00:04:55.891 03:48:11 -- common/autotest_common.sh@10 -- # set +x
00:04:55.891 03:48:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup
00:04:55.891 03:48:11 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs
00:04:55.891 03:48:11 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54
00:04:55.891 03:48:11 -- common/autotest_common.sh@1577 -- # bdfs=()
00:04:55.892 03:48:11 -- common/autotest_common.sh@1577 -- # local bdfs
00:04:55.892 03:48:11 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs
00:04:55.892 03:48:11 -- common/autotest_common.sh@1513 -- # bdfs=()
00:04:55.892 03:48:11 -- common/autotest_common.sh@1513 -- # local bdfs
00:04:55.892 03:48:11 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:55.892 03:48:11 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:55.892 03:48:11 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:04:56.150 03:48:11 -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:04:56.150 03:48:11 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0
00:04:56.150 03:48:11 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs)
00:04:56.150 03:48:11 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device
00:04:56.150 03:48:11 -- common/autotest_common.sh@1580 -- # device=0x0a54
00:04:56.150 03:48:11 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:04:56.150 03:48:11 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf)
00:04:56.150 03:48:11 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0
00:04:56.150 03:48:11 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]]
00:04:56.150 03:48:11 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=694922
00:04:56.150 03:48:11 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:56.150 03:48:11 -- common/autotest_common.sh@1598 -- # waitforlisten 694922
00:04:56.150 03:48:11 -- common/autotest_common.sh@831 -- # '[' -z 694922 ']'
00:04:56.150 03:48:11 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:56.150 03:48:11 -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:56.150 03:48:11 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:56.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:56.150 03:48:11 -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:56.150 03:48:11 -- common/autotest_common.sh@10 -- # set +x
00:04:56.150 [2024-07-25 03:48:11.296945] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization...
00:04:56.150 [2024-07-25 03:48:11.297044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid694922 ]
00:04:56.150 EAL: No free 2048 kB hugepages reported on node 1
00:04:56.150 [2024-07-25 03:48:11.333660] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:04:56.150 [2024-07-25 03:48:11.363992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:56.408 [2024-07-25 03:48:11.454023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:56.666 03:48:11 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:56.666 03:48:11 -- common/autotest_common.sh@864 -- # return 0
00:04:56.666 03:48:11 -- common/autotest_common.sh@1600 -- # bdf_id=0
00:04:56.666 03:48:11 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}"
00:04:56.666 03:48:11 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
00:04:59.945 nvme0n1
00:04:59.945 03:48:14 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:04:59.945 [2024-07-25 03:48:15.023333] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:04:59.945 [2024-07-25 03:48:15.023380] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:04:59.945 request:
00:04:59.945 {
00:04:59.945 "nvme_ctrlr_name": "nvme0",
00:04:59.946 "password": "test",
00:04:59.946 "method": "bdev_nvme_opal_revert",
00:04:59.946 "req_id": 1
00:04:59.946 }
00:04:59.946 Got JSON-RPC error response
00:04:59.946 response:
00:04:59.946 {
00:04:59.946 "code": -32603,
00:04:59.946 "message": "Internal error"
00:04:59.946 }
00:04:59.946 03:48:15 -- common/autotest_common.sh@1604 -- # true
00:04:59.946 03:48:15 -- common/autotest_common.sh@1605 -- # (( ++bdf_id ))
00:04:59.946 03:48:15 -- common/autotest_common.sh@1608 -- # killprocess 694922
00:04:59.946 03:48:15 -- common/autotest_common.sh@950 -- # '[' -z 694922 ']'
00:04:59.946 03:48:15 -- common/autotest_common.sh@954 -- # kill -0 694922
00:04:59.946 03:48:15 -- common/autotest_common.sh@955 -- # uname
00:04:59.946 03:48:15 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:59.946 03:48:15 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 694922
00:04:59.946 03:48:15 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:59.946 03:48:15 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:59.946 03:48:15 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 694922'
killing process with pid 694922
00:04:59.946 03:48:15 -- common/autotest_common.sh@969 -- # kill 694922
00:04:59.946 03:48:15 -- common/autotest_common.sh@974 -- # wait 694922
00:05:01.843 03:48:16 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:05:01.843 03:48:16 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:05:01.843 03:48:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:05:01.843 03:48:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:05:01.843 03:48:16 -- spdk/autotest.sh@162 -- # timing_enter lib
00:05:01.843 03:48:16 -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:01.843 03:48:16 -- common/autotest_common.sh@10 -- # set +x
00:05:01.843 03:48:16 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:05:01.843 03:48:16 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:05:01.843 03:48:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:01.843 03:48:16 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:01.843 03:48:16 -- common/autotest_common.sh@10 -- # set +x
00:05:01.843 ************************************
00:05:01.843 START TEST env
00:05:01.843 ************************************
00:05:01.843 03:48:16 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:05:01.843 * Looking for test storage...
00:05:01.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:05:01.843 03:48:16 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:05:01.843 03:48:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:01.843 03:48:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:01.843 03:48:16 env -- common/autotest_common.sh@10 -- # set +x
00:05:01.843 ************************************
00:05:01.843 START TEST env_memory
00:05:01.843 ************************************
00:05:01.843 03:48:16 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:05:01.843
00:05:01.843
00:05:01.843 CUnit - A unit testing framework for C - Version 2.1-3
00:05:01.843 http://cunit.sourceforge.net/
00:05:01.843
00:05:01.843
00:05:01.843 Suite: memory
00:05:01.843 Test: alloc and free memory map ...[2024-07-25 03:48:16.941632] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:05:01.843 passed
00:05:01.843 Test: mem map translation ...[2024-07-25 03:48:16.961450] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:05:01.843 [2024-07-25 03:48:16.961472] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:05:01.843 [2024-07-25 03:48:16.961524] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:05:01.843 [2024-07-25 03:48:16.961537] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:05:01.843 passed
00:05:01.843 Test: mem map registration ...[2024-07-25 03:48:17.002002] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:05:01.843 [2024-07-25 03:48:17.002022] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:05:01.843 passed
00:05:01.843 Test: mem map adjacent registrations ...passed
00:05:01.843
00:05:01.843 Run Summary: Type Total Ran Passed Failed Inactive
00:05:01.843 suites 1 1 n/a 0 0
00:05:01.843 tests 4 4 4 0 0
00:05:01.843 asserts 152 152 152 0 n/a
00:05:01.843
00:05:01.843 Elapsed time = 0.140 seconds
00:05:01.843
00:05:01.843 real 0m0.149s
00:05:01.843 user 0m0.138s
00:05:01.843 sys 0m0.010s
00:05:01.843 03:48:17 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:01.843 03:48:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:05:01.843 ************************************
00:05:01.843 END TEST env_memory
00:05:01.843 ************************************
00:05:01.843 03:48:17 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:05:01.843 03:48:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:01.843 03:48:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:01.843 03:48:17 env -- common/autotest_common.sh@10 -- # set +x
00:05:01.843 ************************************
00:05:01.843 START TEST env_vtophys
00:05:01.843 ************************************
00:05:01.844 03:48:17 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:05:01.844 EAL: lib.eal log level changed from notice to debug
00:05:01.844 EAL: Detected lcore 0 as core 0 on socket 0
00:05:01.844 EAL: Detected lcore 1 as core 1 on socket 0
00:05:01.844 EAL: Detected lcore 2 as core 2 on socket 0
00:05:01.844 EAL: Detected lcore 3 as core 3 on socket 0
00:05:01.844 EAL: Detected lcore 4 as core 4 on socket 0
00:05:01.844 EAL: Detected lcore 5 as core 5 on socket 0
00:05:01.844 EAL: Detected lcore 6 as core 8 on socket 0
00:05:01.844 EAL: Detected lcore 7 as core 9 on socket 0
00:05:01.844 EAL: Detected lcore 8 as core 10 on socket 0
00:05:01.844 EAL: Detected lcore 9 as core 11 on socket 0
00:05:01.844 EAL: Detected lcore 10 as core 12 on socket 0
00:05:01.844 EAL: Detected lcore 11 as core 13 on socket 0
00:05:01.844 EAL: Detected lcore 12 as core 0 on socket 1
00:05:01.844 EAL: Detected lcore 13 as core 1 on socket 1
00:05:01.844 EAL: Detected lcore 14 as core 2 on socket 1
00:05:01.844 EAL: Detected lcore 15 as core 3 on socket 1
00:05:01.844 EAL: Detected lcore 16 as core 4 on socket 1
00:05:01.844 EAL: Detected lcore 17 as core 5 on socket 1
00:05:01.844 EAL: Detected lcore 18 as core 8 on socket 1
00:05:01.844 EAL: Detected lcore 19 as core 9 on socket 1
00:05:01.844 EAL: Detected lcore 20 as core 10 on socket 1
00:05:01.844 EAL: Detected lcore 21 as core 11 on socket 1
00:05:01.844 EAL: Detected lcore 22 as core 12 on socket 1
00:05:01.844 EAL: Detected lcore 23 as core 13 on socket 1
00:05:01.844 EAL: Detected lcore 24 as core 0 on socket 0
00:05:01.844 EAL: Detected lcore 25 as core 1 on socket 0
00:05:01.844 EAL: Detected lcore 26 as core 2 on socket 0
00:05:01.844 EAL: Detected lcore 27 as core 3 on socket 0
00:05:01.844 EAL: Detected lcore 28 as core 4 on socket 0
00:05:01.844 EAL: Detected lcore 29 as core 5 on socket 0
00:05:01.844 EAL: Detected lcore 30 as core 8 on socket 0
00:05:01.844 EAL: Detected lcore 31 as core 9 on socket 0
00:05:01.844 EAL: Detected lcore 32 as core 10 on socket 0
00:05:01.844 EAL: Detected lcore 33 as core 11 on socket 0
00:05:01.844 EAL: Detected lcore 34 as core 12 on socket 0
00:05:01.844 EAL: Detected lcore 35 as core 13 on socket 0
00:05:01.844 EAL: Detected lcore 36 as core 0 on socket 1
00:05:01.844 EAL: Detected lcore 37 as core 1 on socket 1
00:05:01.844 EAL: Detected lcore 38 as core 2 on socket 1
00:05:01.844 EAL: Detected lcore 39 as core 3 on socket 1
00:05:01.844 EAL: Detected lcore 40 as core 4 on socket 1
00:05:01.844 EAL: Detected lcore 41 as core 5 on socket 1
00:05:01.844 EAL: Detected lcore 42 as core 8 on socket 1
00:05:01.844 EAL: Detected lcore 43 as core 9 on socket 1
00:05:01.844 EAL: Detected lcore 44 as core 10 on socket 1
00:05:01.844 EAL: Detected lcore 45 as core 11 on socket 1
00:05:01.844 EAL: Detected lcore 46 as core 12 on socket 1
00:05:01.844 EAL: Detected lcore 47 as core 13 on socket 1
00:05:01.844 EAL: Maximum logical cores by configuration: 128
00:05:01.844 EAL: Detected CPU lcores: 48
00:05:01.844 EAL: Detected NUMA nodes: 2
00:05:01.844 EAL: Checking presence of .so 'librte_eal.so.24.2'
00:05:01.844 EAL: Detected shared linkage of DPDK
00:05:01.844 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2
00:05:01.844 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2
00:05:01.844 EAL: Registered [vdev] bus.
00:05:01.844 EAL: bus.vdev log level changed from disabled to notice
00:05:01.844 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2
00:05:01.844 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2
00:05:01.844 EAL: pmd.net.i40e.init log level changed from disabled to notice
00:05:01.844 EAL: pmd.net.i40e.driver log level changed from disabled to notice
00:05:01.844 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so
00:05:01.844 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so
00:05:01.844 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so
00:05:01.844 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so
00:05:01.844 EAL: No shared files mode enabled, IPC will be disabled
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Bus pci wants IOVA as 'DC'
00:05:02.102 EAL: Bus vdev wants IOVA as 'DC'
00:05:02.102 EAL: Buses did not request a specific IOVA mode.
00:05:02.102 EAL: IOMMU is available, selecting IOVA as VA mode.
00:05:02.102 EAL: Selected IOVA mode 'VA'
00:05:02.102 EAL: No free 2048 kB hugepages reported on node 1
00:05:02.102 EAL: Probing VFIO support...
00:05:02.102 EAL: IOMMU type 1 (Type 1) is supported
00:05:02.102 EAL: IOMMU type 7 (sPAPR) is not supported
00:05:02.102 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:05:02.102 EAL: VFIO support initialized
00:05:02.102 EAL: Ask a virtual area of 0x2e000 bytes
00:05:02.102 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:05:02.102 EAL: Setting up physically contiguous memory...
00:05:02.102 EAL: Setting maximum number of open files to 524288
00:05:02.102 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:05:02.102 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:05:02.102 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:05:02.102 EAL: Ask a virtual area of 0x61000 bytes
00:05:02.102 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:05:02.102 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:02.102 EAL: Ask a virtual area of 0x400000000 bytes
00:05:02.102 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:05:02.102 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:05:02.102 EAL: Ask a virtual area of 0x61000 bytes
00:05:02.102 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:05:02.102 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:02.102 EAL: Ask a virtual area of 0x400000000 bytes
00:05:02.102 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:05:02.102 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:05:02.102 EAL: Ask a virtual area of 0x61000 bytes
00:05:02.102 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:05:02.102 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:02.102 EAL: Ask a virtual area of 0x400000000 bytes
00:05:02.102 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:05:02.102 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:05:02.102 EAL: Ask a virtual area of 0x61000 bytes
00:05:02.102 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:05:02.102 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:02.102 EAL: Ask a virtual area of 0x400000000 bytes
00:05:02.102 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:05:02.102 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:05:02.102 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:05:02.102 EAL: Ask a virtual area of 0x61000 bytes
00:05:02.102 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:05:02.102 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:02.102 EAL: Ask a virtual area of 0x400000000 bytes
00:05:02.102 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:05:02.102 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:05:02.102 EAL: Ask a virtual area of 0x61000 bytes
00:05:02.102 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:05:02.102 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:02.102 EAL: Ask a virtual area of 0x400000000 bytes
00:05:02.102 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:05:02.102 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:05:02.102 EAL: Ask a virtual area of 0x61000 bytes
00:05:02.102 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:05:02.102 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:02.102 EAL: Ask a virtual area of 0x400000000 bytes
00:05:02.102 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:05:02.102 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:05:02.102 EAL: Ask a virtual area of 0x61000 bytes
00:05:02.102 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:05:02.102 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:02.102 EAL: Ask a virtual area of 0x400000000 bytes
00:05:02.102 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:05:02.102 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:05:02.102 EAL: Hugepages will be freed exactly as allocated.
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: TSC frequency is ~2700000 KHz
00:05:02.102 EAL: Main lcore 0 is ready (tid=7ff9e7684a00;cpuset=[0])
00:05:02.102 EAL: Trying to obtain current memory policy.
00:05:02.102 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:02.102 EAL: Restoring previous memory policy: 0
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was expanded by 2MB
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Mem event callback 'spdk:(nil)' registered
00:05:02.102
00:05:02.102
00:05:02.102 CUnit - A unit testing framework for C - Version 2.1-3
00:05:02.102 http://cunit.sourceforge.net/
00:05:02.102
00:05:02.102
00:05:02.102 Suite: components_suite
00:05:02.102 Test: vtophys_malloc_test ...passed
00:05:02.102 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:05:02.102 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:02.102 EAL: Restoring previous memory policy: 4
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was expanded by 4MB
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was shrunk by 4MB
00:05:02.102 EAL: Trying to obtain current memory policy.
00:05:02.102 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:02.102 EAL: Restoring previous memory policy: 4
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was expanded by 6MB
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was shrunk by 6MB
00:05:02.102 EAL: Trying to obtain current memory policy.
00:05:02.102 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:02.102 EAL: Restoring previous memory policy: 4
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was expanded by 10MB
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was shrunk by 10MB
00:05:02.102 EAL: Trying to obtain current memory policy.
00:05:02.102 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:02.102 EAL: Restoring previous memory policy: 4
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was expanded by 18MB
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was shrunk by 18MB
00:05:02.102 EAL: Trying to obtain current memory policy.
00:05:02.102 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:02.102 EAL: Restoring previous memory policy: 4
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was expanded by 34MB
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was shrunk by 34MB
00:05:02.102 EAL: Trying to obtain current memory policy.
00:05:02.102 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:02.102 EAL: Restoring previous memory policy: 4
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was expanded by 66MB
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was shrunk by 66MB
00:05:02.102 EAL: Trying to obtain current memory policy.
00:05:02.102 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:02.102 EAL: Restoring previous memory policy: 4
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was expanded by 130MB
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was shrunk by 130MB
00:05:02.102 EAL: Trying to obtain current memory policy.
00:05:02.102 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:02.102 EAL: Restoring previous memory policy: 4
00:05:02.102 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.102 EAL: request: mp_malloc_sync
00:05:02.102 EAL: No shared files mode enabled, IPC is disabled
00:05:02.102 EAL: Heap on socket 0 was expanded by 258MB
00:05:02.360 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.360 EAL: request: mp_malloc_sync
00:05:02.360 EAL: No shared files mode enabled, IPC is disabled
00:05:02.360 EAL: Heap on socket 0 was shrunk by 258MB
00:05:02.360 EAL: Trying to obtain current memory policy.
00:05:02.360 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:02.360 EAL: Restoring previous memory policy: 4
00:05:02.360 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.360 EAL: request: mp_malloc_sync
00:05:02.360 EAL: No shared files mode enabled, IPC is disabled
00:05:02.360 EAL: Heap on socket 0 was expanded by 514MB
00:05:02.617 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.617 EAL: request: mp_malloc_sync
00:05:02.617 EAL: No shared files mode enabled, IPC is disabled
00:05:02.617 EAL: Heap on socket 0 was shrunk by 514MB
00:05:02.617 EAL: Trying to obtain current memory policy.
00:05:02.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:02.874 EAL: Restoring previous memory policy: 4 00:05:02.874 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.874 EAL: request: mp_malloc_sync 00:05:02.874 EAL: No shared files mode enabled, IPC is disabled 00:05:02.874 EAL: Heap on socket 0 was expanded by 1026MB 00:05:03.131 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.388 EAL: request: mp_malloc_sync 00:05:03.388 EAL: No shared files mode enabled, IPC is disabled 00:05:03.388 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:03.388 passed 00:05:03.388 00:05:03.388 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.388 suites 1 1 n/a 0 0 00:05:03.388 tests 2 2 2 0 0 00:05:03.388 asserts 497 497 497 0 n/a 00:05:03.388 00:05:03.388 Elapsed time = 1.369 seconds 00:05:03.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.388 EAL: request: mp_malloc_sync 00:05:03.388 EAL: No shared files mode enabled, IPC is disabled 00:05:03.388 EAL: Heap on socket 0 was shrunk by 2MB 00:05:03.388 EAL: No shared files mode enabled, IPC is disabled 00:05:03.388 EAL: No shared files mode enabled, IPC is disabled 00:05:03.388 EAL: No shared files mode enabled, IPC is disabled 00:05:03.388 00:05:03.388 real 0m1.484s 00:05:03.388 user 0m0.849s 00:05:03.388 sys 0m0.603s 00:05:03.388 03:48:18 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.388 03:48:18 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:03.388 ************************************ 00:05:03.388 END TEST env_vtophys 00:05:03.388 ************************************ 00:05:03.388 03:48:18 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:03.388 03:48:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.388 03:48:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.388 03:48:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.388 
************************************
00:05:03.388 START TEST env_pci
00:05:03.388 ************************************
00:05:03.388 03:48:18 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:03.388
00:05:03.388
00:05:03.388 CUnit - A unit testing framework for C - Version 2.1-3
00:05:03.388 http://cunit.sourceforge.net/
00:05:03.388
00:05:03.388
00:05:03.388 Suite: pci
00:05:03.388 Test: pci_hook ...[2024-07-25 03:48:18.644211] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 695808 has claimed it
00:05:03.388 EAL: Cannot find device (10000:00:01.0)
00:05:03.388 EAL: Failed to attach device on primary process
00:05:03.388 passed
00:05:03.388
00:05:03.388 Run Summary: Type Total Ran Passed Failed Inactive
00:05:03.388 suites 1 1 n/a 0 0
00:05:03.388 tests 1 1 1 0 0
00:05:03.388 asserts 25 25 25 0 n/a
00:05:03.388
00:05:03.388 Elapsed time = 0.021 seconds
00:05:03.388
00:05:03.388 real 0m0.033s
00:05:03.388 user 0m0.011s
00:05:03.388 sys 0m0.022s
00:05:03.388 03:48:18 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:03.388 03:48:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:03.388 ************************************
00:05:03.388 END TEST env_pci
00:05:03.388 ************************************
00:05:03.388 03:48:18 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:03.388 03:48:18 env -- env/env.sh@15 -- # uname
00:05:03.388 03:48:18 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:03.646 03:48:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:03.646 03:48:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:03.646 03:48:18 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:05:03.646 03:48:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:03.646 03:48:18 env -- common/autotest_common.sh@10 -- # set +x
00:05:03.646 ************************************
00:05:03.646 START TEST env_dpdk_post_init
00:05:03.646 ************************************
00:05:03.646 03:48:18 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:03.646 EAL: Detected CPU lcores: 48
00:05:03.646 EAL: Detected NUMA nodes: 2
00:05:03.646 EAL: Detected shared linkage of DPDK
00:05:03.646 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:03.646 EAL: Selected IOVA mode 'VA'
00:05:03.646 EAL: No free 2048 kB hugepages reported on node 1
00:05:03.646 EAL: VFIO support initialized
00:05:03.646 EAL: Using IOMMU type 1 (Type 1)
00:05:07.824 Starting DPDK initialization...
00:05:07.824 Starting SPDK post initialization...
00:05:07.824 SPDK NVMe probe
00:05:07.824 Attaching to 0000:88:00.0
00:05:07.824 Attached to 0000:88:00.0
00:05:07.824 Cleaning up...
00:05:07.824
00:05:07.824 real 0m4.384s
00:05:07.824 user 0m3.261s
00:05:07.824 sys 0m0.179s
00:05:07.824 03:48:23 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:07.824 03:48:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:07.824 ************************************
00:05:07.824 END TEST env_dpdk_post_init
00:05:07.824 ************************************
00:05:07.824 03:48:23 env -- env/env.sh@26 -- # uname
00:05:07.824 03:48:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:07.824 03:48:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:07.824 03:48:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:07.824 03:48:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:07.824 03:48:23 env -- common/autotest_common.sh@10 -- # set +x
00:05:08.083 ************************************
00:05:08.083 START TEST env_mem_callbacks
00:05:08.083 ************************************
00:05:08.083 03:48:23 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:08.083 EAL: Detected CPU lcores: 48
00:05:08.083 EAL: Detected NUMA nodes: 2
00:05:08.083 EAL: Detected shared linkage of DPDK
00:05:08.083 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:08.083 EAL: Selected IOVA mode 'VA'
00:05:08.083 EAL: No free 2048 kB hugepages reported on node 1
00:05:08.083 EAL: VFIO support initialized
00:05:08.083
00:05:08.083
00:05:08.083 CUnit - A unit testing framework for C - Version 2.1-3
00:05:08.083 http://cunit.sourceforge.net/
00:05:08.083
00:05:08.083
00:05:08.083 Suite: memory
00:05:08.083 Test: test ...
00:05:08.083 register 0x200000200000 2097152
00:05:08.083 malloc 3145728
00:05:08.083 register 0x200000400000 4194304
00:05:08.083 buf 0x200000500000 len 3145728 PASSED
00:05:08.083 malloc 64
00:05:08.083 buf 0x2000004fff40 len 64 PASSED
00:05:08.083 malloc 4194304
00:05:08.083 register 0x200000800000 6291456
00:05:08.083 buf 0x200000a00000 len 4194304 PASSED
00:05:08.083 free 0x200000500000 3145728
00:05:08.083 free 0x2000004fff40 64
00:05:08.083 unregister 0x200000400000 4194304 PASSED
00:05:08.083 free 0x200000a00000 4194304
00:05:08.083 unregister 0x200000800000 6291456 PASSED
00:05:08.083 malloc 8388608
00:05:08.083 register 0x200000400000 10485760
00:05:08.083 buf 0x200000600000 len 8388608 PASSED
00:05:08.083 free 0x200000600000 8388608
00:05:08.083 unregister 0x200000400000 10485760 PASSED
00:05:08.083 passed
00:05:08.083
00:05:08.083 Run Summary: Type Total Ran Passed Failed Inactive
00:05:08.083 suites 1 1 n/a 0 0
00:05:08.083 tests 1 1 1 0 0
00:05:08.083 asserts 15 15 15 0 n/a
00:05:08.083
00:05:08.083 Elapsed time = 0.005 seconds
00:05:08.083
00:05:08.083 real 0m0.047s
00:05:08.083 user 0m0.011s
00:05:08.083 sys 0m0.036s
00:05:08.083 03:48:23 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:08.083 03:48:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:08.083 ************************************
00:05:08.083 END TEST env_mem_callbacks
00:05:08.083 ************************************
00:05:08.083
00:05:08.083 real 0m6.377s
00:05:08.083 user 0m4.372s
00:05:08.083 sys 0m1.047s
00:05:08.083 03:48:23 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:08.083 03:48:23 env -- common/autotest_common.sh@10 -- # set +x
00:05:08.083 ************************************
00:05:08.083 END TEST env
00:05:08.083 ************************************
00:05:08.083 03:48:23 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:08.083 03:48:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:08.083 03:48:23 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:08.083 03:48:23 -- common/autotest_common.sh@10 -- # set +x
00:05:08.083 ************************************
00:05:08.083 START TEST rpc
00:05:08.083 ************************************
00:05:08.083 03:48:23 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:08.083 * Looking for test storage...
00:05:08.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:08.083 03:48:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=696470
00:05:08.083 03:48:23 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:08.083 03:48:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:08.083 03:48:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 696470
00:05:08.083 03:48:23 rpc -- common/autotest_common.sh@831 -- # '[' -z 696470 ']'
00:05:08.083 03:48:23 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:08.083 03:48:23 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:08.083 03:48:23 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:08.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 03:48:23 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:08.083 03:48:23 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:08.083 [2024-07-25 03:48:23.349625] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization...
00:05:08.083 [2024-07-25 03:48:23.349725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696470 ]
00:05:08.083 EAL: No free 2048 kB hugepages reported on node 1
00:05:08.083 [2024-07-25 03:48:23.382402] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:08.340 [2024-07-25 03:48:23.410529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.340 [2024-07-25 03:48:23.495850] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:08.340 [2024-07-25 03:48:23.495906] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 696470' to capture a snapshot of events at runtime.
00:05:08.340 [2024-07-25 03:48:23.495934] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:08.341 [2024-07-25 03:48:23.495946] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:08.341 [2024-07-25 03:48:23.495955] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid696470 for offline analysis/debug.
00:05:08.341 [2024-07-25 03:48:23.495988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.599 03:48:23 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:08.599 03:48:23 rpc -- common/autotest_common.sh@864 -- # return 0
00:05:08.599 03:48:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:08.599 03:48:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:08.599 03:48:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:08.599 03:48:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:08.599 03:48:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:08.599 03:48:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:08.599 03:48:23 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:08.599 ************************************
00:05:08.599 START TEST rpc_integrity
00:05:08.599 ************************************
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:05:08.599 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:08.599 03:48:23 rpc.rpc_integrity --
rpc/rpc.sh@12 -- # bdevs='[]'
00:05:08.599 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:08.599 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:08.599 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:08.599 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:08.599 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:08.599 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:08.599 {
00:05:08.599 "name": "Malloc0",
00:05:08.599 "aliases": [
00:05:08.599 "f8d3fcd8-97f3-4dc8-92e6-36f2c5aa1e48"
00:05:08.599 ],
00:05:08.599 "product_name": "Malloc disk",
00:05:08.599 "block_size": 512,
00:05:08.599 "num_blocks": 16384,
00:05:08.599 "uuid": "f8d3fcd8-97f3-4dc8-92e6-36f2c5aa1e48",
00:05:08.599 "assigned_rate_limits": {
00:05:08.599 "rw_ios_per_sec": 0,
00:05:08.599 "rw_mbytes_per_sec": 0,
00:05:08.599 "r_mbytes_per_sec": 0,
00:05:08.599 "w_mbytes_per_sec": 0
00:05:08.599 },
00:05:08.599 "claimed": false,
00:05:08.599 "zoned": false,
00:05:08.599 "supported_io_types": {
00:05:08.599 "read": true,
00:05:08.599 "write": true,
00:05:08.599 "unmap": true,
00:05:08.599 "flush": true,
00:05:08.599 "reset": true,
00:05:08.599 "nvme_admin": false,
00:05:08.599 "nvme_io": false,
00:05:08.599 "nvme_io_md": false,
00:05:08.599 "write_zeroes": true,
00:05:08.599 "zcopy": true,
00:05:08.599 "get_zone_info": false,
00:05:08.599 "zone_management": false,
00:05:08.599 "zone_append": false,
00:05:08.599 "compare": false,
00:05:08.599 "compare_and_write": false,
00:05:08.599 "abort": true,
00:05:08.599 "seek_hole": false,
00:05:08.599 "seek_data": false,
00:05:08.599 "copy": true,
00:05:08.599 "nvme_iov_md": false
00:05:08.599 },
00:05:08.599 "memory_domains": [
00:05:08.599 {
00:05:08.599 "dma_device_id": "system",
00:05:08.599 "dma_device_type": 1
00:05:08.599 },
00:05:08.599 {
00:05:08.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:08.599 "dma_device_type": 2
00:05:08.599 }
00:05:08.599 ],
00:05:08.599 "driver_specific": {}
00:05:08.599 }
00:05:08.599 ]'
00:05:08.599 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:08.599 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:08.599 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:08.599 [2024-07-25 03:48:23.875572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:08.599 [2024-07-25 03:48:23.875617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:08.599 [2024-07-25 03:48:23.875643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdac7f0
00:05:08.599 [2024-07-25 03:48:23.875658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:08.599 [2024-07-25 03:48:23.877157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:08.599 [2024-07-25 03:48:23.877185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:08.599 Passthru0
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:08.599 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:08.599 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:08.599 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:08.599 {
00:05:08.599 "name": "Malloc0",
00:05:08.599 "aliases": [
00:05:08.599 "f8d3fcd8-97f3-4dc8-92e6-36f2c5aa1e48"
00:05:08.599 ],
00:05:08.599 "product_name": "Malloc disk",
00:05:08.599 "block_size": 512,
00:05:08.599 "num_blocks": 16384,
00:05:08.599 "uuid": "f8d3fcd8-97f3-4dc8-92e6-36f2c5aa1e48",
00:05:08.599 "assigned_rate_limits": {
00:05:08.599 "rw_ios_per_sec": 0,
00:05:08.599 "rw_mbytes_per_sec": 0,
00:05:08.599 "r_mbytes_per_sec": 0,
00:05:08.599 "w_mbytes_per_sec": 0
00:05:08.599 },
00:05:08.599 "claimed": true,
00:05:08.599 "claim_type": "exclusive_write",
00:05:08.599 "zoned": false,
00:05:08.599 "supported_io_types": {
00:05:08.599 "read": true,
00:05:08.599 "write": true,
00:05:08.599 "unmap": true,
00:05:08.599 "flush": true,
00:05:08.599 "reset": true,
00:05:08.599 "nvme_admin": false,
00:05:08.599 "nvme_io": false,
00:05:08.599 "nvme_io_md": false,
00:05:08.599 "write_zeroes": true,
00:05:08.599 "zcopy": true,
00:05:08.599 "get_zone_info": false,
00:05:08.599 "zone_management": false,
00:05:08.599 "zone_append": false,
00:05:08.599 "compare": false,
00:05:08.599 "compare_and_write": false,
00:05:08.599 "abort": true,
00:05:08.599 "seek_hole": false,
00:05:08.599 "seek_data": false,
00:05:08.599 "copy": true,
00:05:08.599 "nvme_iov_md": false
00:05:08.599 },
00:05:08.599 "memory_domains": [
00:05:08.599 {
00:05:08.599 "dma_device_id": "system",
00:05:08.599 "dma_device_type": 1
00:05:08.599 },
00:05:08.599 {
00:05:08.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:08.599 "dma_device_type": 2
00:05:08.599 }
00:05:08.599 ],
00:05:08.599 "driver_specific": {}
00:05:08.599 },
00:05:08.599 {
00:05:08.599 "name": "Passthru0",
00:05:08.599 "aliases": [
00:05:08.599 "5e988788-de97-50ff-b070-82139b8effd8"
00:05:08.599 ],
00:05:08.599 "product_name": "passthru",
00:05:08.599 "block_size": 512,
00:05:08.599 "num_blocks": 16384,
00:05:08.599 "uuid": "5e988788-de97-50ff-b070-82139b8effd8",
00:05:08.599 "assigned_rate_limits": {
00:05:08.599 "rw_ios_per_sec": 0,
00:05:08.599 "rw_mbytes_per_sec": 0,
00:05:08.599 "r_mbytes_per_sec": 0,
00:05:08.599 "w_mbytes_per_sec": 0
00:05:08.599 },
00:05:08.599 "claimed": false,
00:05:08.599 "zoned": false,
00:05:08.599 "supported_io_types": {
00:05:08.599 "read": true,
00:05:08.599 "write": true,
00:05:08.599 "unmap": true,
00:05:08.599 "flush": true,
00:05:08.599 "reset": true,
00:05:08.599 "nvme_admin": false,
00:05:08.599 "nvme_io": false,
00:05:08.599 "nvme_io_md": false,
00:05:08.599 "write_zeroes": true,
00:05:08.599 "zcopy": true,
00:05:08.599 "get_zone_info": false,
00:05:08.599 "zone_management": false,
00:05:08.599 "zone_append": false,
00:05:08.599 "compare": false,
00:05:08.599 "compare_and_write": false,
00:05:08.599 "abort": true,
00:05:08.599 "seek_hole": false,
00:05:08.599 "seek_data": false,
00:05:08.599 "copy": true,
00:05:08.599 "nvme_iov_md": false
00:05:08.600 },
00:05:08.600 "memory_domains": [
00:05:08.600 {
00:05:08.600 "dma_device_id": "system",
00:05:08.600 "dma_device_type": 1
00:05:08.600 },
00:05:08.600 {
00:05:08.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:08.600 "dma_device_type": 2
00:05:08.600 }
00:05:08.600 ],
00:05:08.600 "driver_specific": {
00:05:08.600 "passthru": {
00:05:08.600 "name": "Passthru0",
00:05:08.600 "base_bdev_name": "Malloc0"
00:05:08.600 }
00:05:08.600 }
00:05:08.600 }
00:05:08.600 ]'
00:05:08.600 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:08.858 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:08.858 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:08.858 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:08.858 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:08.858 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:08.858 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:08.858 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:08.858 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:08.858 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:08.858 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:08.858 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:08.858 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:08.858 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:08.858 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:08.858 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:08.858 03:48:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:08.858
00:05:08.858 real 0m0.224s
00:05:08.858 user 0m0.150s
00:05:08.858 sys 0m0.018s
00:05:08.858 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:08.858 03:48:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:08.858 ************************************
00:05:08.858 END TEST rpc_integrity
00:05:08.858 ************************************
00:05:08.858 03:48:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:08.858 03:48:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:08.858 03:48:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:08.858 03:48:24 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:08.858 ************************************
00:05:08.858 START TEST rpc_plugins
00:05:08.858 ************************************
00:05:08.858 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:05:08.858 03:48:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:08.858 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:08.858 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:08.858 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:08.858 03:48:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:08.858 03:48:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:08.858 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:08.858 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:08.858 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:08.858 03:48:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:08.858 {
00:05:08.858 "name": "Malloc1",
00:05:08.858 "aliases": [
00:05:08.858 "b96e0e11-1703-4c06-8028-2d59a514d7bb"
00:05:08.858 ],
00:05:08.858 "product_name": "Malloc disk",
00:05:08.858 "block_size": 4096,
00:05:08.858 "num_blocks": 256,
00:05:08.858 "uuid": "b96e0e11-1703-4c06-8028-2d59a514d7bb",
00:05:08.858 "assigned_rate_limits": {
00:05:08.858 "rw_ios_per_sec": 0,
00:05:08.858 "rw_mbytes_per_sec": 0,
00:05:08.858 "r_mbytes_per_sec": 0,
00:05:08.858 "w_mbytes_per_sec": 0
00:05:08.858 },
00:05:08.858 "claimed": false,
00:05:08.858 "zoned": false,
00:05:08.858 "supported_io_types": {
00:05:08.858 "read": true,
00:05:08.858 "write": true,
00:05:08.858 "unmap": true,
00:05:08.858 "flush": true,
00:05:08.858 "reset": true,
00:05:08.858 "nvme_admin": false,
00:05:08.858 "nvme_io": false,
00:05:08.858 "nvme_io_md": false,
00:05:08.858 "write_zeroes": true,
00:05:08.858 "zcopy": true,
00:05:08.858 "get_zone_info": false,
00:05:08.858 "zone_management": false,
00:05:08.858 "zone_append": false,
00:05:08.858 "compare": false,
00:05:08.858 "compare_and_write": false,
00:05:08.858 "abort": true,
00:05:08.858 "seek_hole": false,
00:05:08.858 "seek_data": false,
00:05:08.858 "copy": true,
00:05:08.858 "nvme_iov_md": false
00:05:08.858 },
00:05:08.858 "memory_domains": [
00:05:08.858 {
00:05:08.858 "dma_device_id": "system",
00:05:08.858 "dma_device_type": 1
00:05:08.858 },
00:05:08.858 {
00:05:08.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:08.858 "dma_device_type": 2
00:05:08.858 }
00:05:08.858 ],
00:05:08.859 "driver_specific": {}
00:05:08.859 }
00:05:08.859 ]'
00:05:08.859 03:48:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:08.859 03:48:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:08.859 03:48:24 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:08.859 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:08.859 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:08.859 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:08.859 03:48:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:08.859 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:08.859 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:08.859 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:08.859 03:48:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:08.859 03:48:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:08.859 03:48:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:08.859
00:05:08.859 real 0m0.115s
00:05:08.859 user 0m0.078s
00:05:08.859 sys 0m0.008s
00:05:08.859 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:08.859 03:48:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:08.859 ************************************
00:05:08.859 END TEST rpc_plugins
00:05:08.859 ************************************
00:05:09.117 03:48:24 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:09.117 03:48:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:09.117 03:48:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:09.117 03:48:24 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:09.117 ************************************
00:05:09.117 START TEST rpc_trace_cmd_test
00:05:09.117 ************************************
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:05:09.117 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid696470",
00:05:09.117 "tpoint_group_mask": "0x8",
00:05:09.117 "iscsi_conn": {
00:05:09.117 "mask": "0x2",
00:05:09.117 "tpoint_mask": "0x0"
00:05:09.117 },
00:05:09.117 "scsi": {
00:05:09.117 "mask": "0x4",
00:05:09.117 "tpoint_mask": "0x0"
00:05:09.117 },
00:05:09.117 "bdev": {
00:05:09.117 "mask": "0x8",
00:05:09.117 "tpoint_mask": "0xffffffffffffffff"
00:05:09.117 },
00:05:09.117 "nvmf_rdma": {
00:05:09.117 "mask": "0x10",
00:05:09.117 "tpoint_mask": "0x0"
00:05:09.117 },
00:05:09.117 "nvmf_tcp": {
00:05:09.117 "mask": "0x20",
00:05:09.117 "tpoint_mask": "0x0"
00:05:09.117 },
00:05:09.117 "ftl": {
00:05:09.117 "mask": "0x40",
00:05:09.117 "tpoint_mask": "0x0"
00:05:09.117 },
00:05:09.117 "blobfs": {
00:05:09.117 "mask": "0x80",
00:05:09.117 "tpoint_mask": "0x0"
00:05:09.117 },
00:05:09.117 "dsa": {
00:05:09.117 "mask": "0x200",
00:05:09.117 "tpoint_mask": "0x0"
00:05:09.117 },
00:05:09.117 "thread": {
00:05:09.117 "mask": "0x400",
00:05:09.117 "tpoint_mask": "0x0"
00:05:09.117 },
00:05:09.117 "nvme_pcie": {
00:05:09.117 "mask": "0x800",
00:05:09.117 "tpoint_mask": "0x0"
00:05:09.117 },
00:05:09.117 "iaa": {
00:05:09.117 "mask": "0x1000",
00:05:09.117 "tpoint_mask": "0x0"
00:05:09.117 },
00:05:09.117 "nvme_tcp": {
00:05:09.117 "mask": "0x2000",
00:05:09.117 "tpoint_mask": "0x0"
00:05:09.117 },
00:05:09.117 "bdev_nvme": {
00:05:09.117 "mask": "0x4000",
00:05:09.117 "tpoint_mask": "0x0"
00:05:09.117 },
00:05:09.117 "sock": {
00:05:09.117 "mask": "0x8000",
00:05:09.117 "tpoint_mask": "0x0"
00:05:09.117 }
00:05:09.117 }'
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']'
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:05:09.117
00:05:09.117 real 0m0.200s
00:05:09.117 user 0m0.175s
00:05:09.117 sys 0m0.018s
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:09.117 03:48:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:09.117
************************************
00:05:09.117 END TEST rpc_trace_cmd_test
00:05:09.117 ************************************
00:05:09.375 03:48:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:09.375 03:48:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:09.375 03:48:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:09.375 03:48:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:09.375 03:48:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:09.375 03:48:24 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:09.375 ************************************
00:05:09.375 START TEST rpc_daemon_integrity
00:05:09.375 ************************************
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.375 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:09.375 {
00:05:09.375 "name": "Malloc2",
00:05:09.375 "aliases": [
00:05:09.375 "7305b585-72a4-41c0-8fd7-a57eed15ac51"
00:05:09.375 ],
00:05:09.375 "product_name": "Malloc disk",
00:05:09.375 "block_size": 512,
00:05:09.376 "num_blocks": 16384,
00:05:09.376 "uuid": "7305b585-72a4-41c0-8fd7-a57eed15ac51",
00:05:09.376 "assigned_rate_limits": {
00:05:09.376 "rw_ios_per_sec": 0,
00:05:09.376 "rw_mbytes_per_sec": 0,
00:05:09.376 "r_mbytes_per_sec": 0,
00:05:09.376 "w_mbytes_per_sec": 0
00:05:09.376 },
00:05:09.376 "claimed": false,
00:05:09.376 "zoned": false,
00:05:09.376 "supported_io_types": {
00:05:09.376 "read": true,
00:05:09.376 "write": true,
00:05:09.376 "unmap": true,
00:05:09.376 "flush": true,
00:05:09.376 "reset": true,
00:05:09.376 "nvme_admin": false,
00:05:09.376 "nvme_io": false,
00:05:09.376 "nvme_io_md": false,
00:05:09.376 "write_zeroes": true,
00:05:09.376 "zcopy": true,
00:05:09.376 "get_zone_info": false,
00:05:09.376 "zone_management": false,
00:05:09.376 "zone_append": false,
00:05:09.376 "compare": false,
00:05:09.376 "compare_and_write": false,
00:05:09.376 "abort": true,
00:05:09.376 "seek_hole": false,
00:05:09.376 "seek_data": false,
00:05:09.376 "copy": true,
00:05:09.376 "nvme_iov_md": false
00:05:09.376 },
00:05:09.376 "memory_domains": [
00:05:09.376 {
00:05:09.376 "dma_device_id": "system",
00:05:09.376 "dma_device_type": 1
00:05:09.376 },
00:05:09.376 {
00:05:09.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:09.376 "dma_device_type": 2
00:05:09.376 }
00:05:09.376 ],
00:05:09.376 "driver_specific": {}
00:05:09.376 }
00:05:09.376 ]'
03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.376 [2024-07-25 03:48:24.549542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:05:09.376 [2024-07-25 03:48:24.549586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:09.376 [2024-07-25 03:48:24.549611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf50490
00:05:09.376 [2024-07-25 03:48:24.549626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:09.376 [2024-07-25 03:48:24.550946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:09.376 [2024-07-25 03:48:24.550974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:09.376 Passthru0
00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:09.376 {
00:05:09.376 "name": "Malloc2",
00:05:09.376 "aliases": [
00:05:09.376 "7305b585-72a4-41c0-8fd7-a57eed15ac51"
00:05:09.376 ],
00:05:09.376 "product_name": "Malloc disk",
00:05:09.376 "block_size": 512,
"num_blocks": 16384, 00:05:09.376 "uuid": "7305b585-72a4-41c0-8fd7-a57eed15ac51", 00:05:09.376 "assigned_rate_limits": { 00:05:09.376 "rw_ios_per_sec": 0, 00:05:09.376 "rw_mbytes_per_sec": 0, 00:05:09.376 "r_mbytes_per_sec": 0, 00:05:09.376 "w_mbytes_per_sec": 0 00:05:09.376 }, 00:05:09.376 "claimed": true, 00:05:09.376 "claim_type": "exclusive_write", 00:05:09.376 "zoned": false, 00:05:09.376 "supported_io_types": { 00:05:09.376 "read": true, 00:05:09.376 "write": true, 00:05:09.376 "unmap": true, 00:05:09.376 "flush": true, 00:05:09.376 "reset": true, 00:05:09.376 "nvme_admin": false, 00:05:09.376 "nvme_io": false, 00:05:09.376 "nvme_io_md": false, 00:05:09.376 "write_zeroes": true, 00:05:09.376 "zcopy": true, 00:05:09.376 "get_zone_info": false, 00:05:09.376 "zone_management": false, 00:05:09.376 "zone_append": false, 00:05:09.376 "compare": false, 00:05:09.376 "compare_and_write": false, 00:05:09.376 "abort": true, 00:05:09.376 "seek_hole": false, 00:05:09.376 "seek_data": false, 00:05:09.376 "copy": true, 00:05:09.376 "nvme_iov_md": false 00:05:09.376 }, 00:05:09.376 "memory_domains": [ 00:05:09.376 { 00:05:09.376 "dma_device_id": "system", 00:05:09.376 "dma_device_type": 1 00:05:09.376 }, 00:05:09.376 { 00:05:09.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.376 "dma_device_type": 2 00:05:09.376 } 00:05:09.376 ], 00:05:09.376 "driver_specific": {} 00:05:09.376 }, 00:05:09.376 { 00:05:09.376 "name": "Passthru0", 00:05:09.376 "aliases": [ 00:05:09.376 "b97cdc92-ad09-5d4f-aed0-f2e4d6d4c79d" 00:05:09.376 ], 00:05:09.376 "product_name": "passthru", 00:05:09.376 "block_size": 512, 00:05:09.376 "num_blocks": 16384, 00:05:09.376 "uuid": "b97cdc92-ad09-5d4f-aed0-f2e4d6d4c79d", 00:05:09.376 "assigned_rate_limits": { 00:05:09.376 "rw_ios_per_sec": 0, 00:05:09.376 "rw_mbytes_per_sec": 0, 00:05:09.376 "r_mbytes_per_sec": 0, 00:05:09.376 "w_mbytes_per_sec": 0 00:05:09.376 }, 00:05:09.376 "claimed": false, 00:05:09.376 "zoned": false, 00:05:09.376 
"supported_io_types": { 00:05:09.376 "read": true, 00:05:09.376 "write": true, 00:05:09.376 "unmap": true, 00:05:09.376 "flush": true, 00:05:09.376 "reset": true, 00:05:09.376 "nvme_admin": false, 00:05:09.376 "nvme_io": false, 00:05:09.376 "nvme_io_md": false, 00:05:09.376 "write_zeroes": true, 00:05:09.376 "zcopy": true, 00:05:09.376 "get_zone_info": false, 00:05:09.376 "zone_management": false, 00:05:09.376 "zone_append": false, 00:05:09.376 "compare": false, 00:05:09.376 "compare_and_write": false, 00:05:09.376 "abort": true, 00:05:09.376 "seek_hole": false, 00:05:09.376 "seek_data": false, 00:05:09.376 "copy": true, 00:05:09.376 "nvme_iov_md": false 00:05:09.376 }, 00:05:09.376 "memory_domains": [ 00:05:09.376 { 00:05:09.376 "dma_device_id": "system", 00:05:09.376 "dma_device_type": 1 00:05:09.376 }, 00:05:09.376 { 00:05:09.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.376 "dma_device_type": 2 00:05:09.376 } 00:05:09.376 ], 00:05:09.376 "driver_specific": { 00:05:09.376 "passthru": { 00:05:09.376 "name": "Passthru0", 00:05:09.376 "base_bdev_name": "Malloc2" 00:05:09.376 } 00:05:09.376 } 00:05:09.376 } 00:05:09.376 ]' 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:09.376 00:05:09.376 real 0m0.222s 00:05:09.376 user 0m0.154s 00:05:09.376 sys 0m0.015s 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.376 03:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.376 ************************************ 00:05:09.376 END TEST rpc_daemon_integrity 00:05:09.376 ************************************ 00:05:09.634 03:48:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:09.634 03:48:24 rpc -- rpc/rpc.sh@84 -- # killprocess 696470 00:05:09.634 03:48:24 rpc -- common/autotest_common.sh@950 -- # '[' -z 696470 ']' 00:05:09.634 03:48:24 rpc -- common/autotest_common.sh@954 -- # kill -0 696470 00:05:09.634 03:48:24 rpc -- common/autotest_common.sh@955 -- # uname 00:05:09.634 03:48:24 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.634 03:48:24 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 696470 00:05:09.634 03:48:24 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.634 03:48:24 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.634 03:48:24 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 696470' 
00:05:09.634 killing process with pid 696470 00:05:09.634 03:48:24 rpc -- common/autotest_common.sh@969 -- # kill 696470 00:05:09.634 03:48:24 rpc -- common/autotest_common.sh@974 -- # wait 696470 00:05:09.892 00:05:09.892 real 0m1.881s 00:05:09.892 user 0m2.402s 00:05:09.892 sys 0m0.559s 00:05:09.892 03:48:25 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.892 03:48:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.892 ************************************ 00:05:09.892 END TEST rpc 00:05:09.892 ************************************ 00:05:09.892 03:48:25 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:09.892 03:48:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.893 03:48:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.893 03:48:25 -- common/autotest_common.sh@10 -- # set +x 00:05:09.893 ************************************ 00:05:09.893 START TEST skip_rpc 00:05:09.893 ************************************ 00:05:09.893 03:48:25 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:10.151 * Looking for test storage... 
00:05:10.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:10.151 03:48:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:10.151 03:48:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:10.151 03:48:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:10.151 03:48:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.151 03:48:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.151 03:48:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.151 ************************************ 00:05:10.151 START TEST skip_rpc 00:05:10.151 ************************************ 00:05:10.151 03:48:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:10.151 03:48:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=696900 00:05:10.151 03:48:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:10.151 03:48:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.151 03:48:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:10.151 [2024-07-25 03:48:25.313607] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:10.151 [2024-07-25 03:48:25.313703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696900 ] 00:05:10.151 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.151 [2024-07-25 03:48:25.344578] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:10.151 [2024-07-25 03:48:25.376627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.414 [2024-07-25 03:48:25.469269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 696900 00:05:15.720 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 696900 ']' 00:05:15.720 
03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 696900 00:05:15.721 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:15.721 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.721 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 696900 00:05:15.721 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:15.721 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:15.721 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 696900' 00:05:15.721 killing process with pid 696900 00:05:15.721 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 696900 00:05:15.721 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 696900 00:05:15.721 00:05:15.721 real 0m5.453s 00:05:15.721 user 0m5.133s 00:05:15.721 sys 0m0.323s 00:05:15.721 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.721 03:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.721 ************************************ 00:05:15.721 END TEST skip_rpc 00:05:15.721 ************************************ 00:05:15.721 03:48:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:15.721 03:48:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.721 03:48:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.721 03:48:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.721 ************************************ 00:05:15.721 START TEST skip_rpc_with_json 00:05:15.721 ************************************ 00:05:15.721 03:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:15.721 03:48:30 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:15.721 03:48:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=697587 00:05:15.721 03:48:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.721 03:48:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.721 03:48:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 697587 00:05:15.721 03:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 697587 ']' 00:05:15.721 03:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.721 03:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.721 03:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.721 03:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.721 03:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.721 [2024-07-25 03:48:30.820361] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:15.721 [2024-07-25 03:48:30.820441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697587 ] 00:05:15.721 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.721 [2024-07-25 03:48:30.852218] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:05:15.721 [2024-07-25 03:48:30.884148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.721 [2024-07-25 03:48:30.972554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.979 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.979 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:15.979 03:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:15.979 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.979 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.979 [2024-07-25 03:48:31.234171] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:15.979 request: 00:05:15.979 { 00:05:15.979 "trtype": "tcp", 00:05:15.979 "method": "nvmf_get_transports", 00:05:15.979 "req_id": 1 00:05:15.979 } 00:05:15.979 Got JSON-RPC error response 00:05:15.979 response: 00:05:15.979 { 00:05:15.979 "code": -19, 00:05:15.979 "message": "No such device" 00:05:15.979 } 00:05:15.979 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:15.979 03:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:15.979 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.979 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.979 [2024-07-25 03:48:31.242313] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.979 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.979 03:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:15.979 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:05:15.979 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.237 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.237 03:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:16.237 { 00:05:16.237 "subsystems": [ 00:05:16.237 { 00:05:16.237 "subsystem": "vfio_user_target", 00:05:16.237 "config": null 00:05:16.237 }, 00:05:16.237 { 00:05:16.237 "subsystem": "keyring", 00:05:16.237 "config": [] 00:05:16.237 }, 00:05:16.237 { 00:05:16.237 "subsystem": "iobuf", 00:05:16.237 "config": [ 00:05:16.237 { 00:05:16.237 "method": "iobuf_set_options", 00:05:16.237 "params": { 00:05:16.237 "small_pool_count": 8192, 00:05:16.237 "large_pool_count": 1024, 00:05:16.237 "small_bufsize": 8192, 00:05:16.237 "large_bufsize": 135168 00:05:16.237 } 00:05:16.237 } 00:05:16.237 ] 00:05:16.237 }, 00:05:16.237 { 00:05:16.237 "subsystem": "sock", 00:05:16.237 "config": [ 00:05:16.237 { 00:05:16.237 "method": "sock_set_default_impl", 00:05:16.237 "params": { 00:05:16.237 "impl_name": "posix" 00:05:16.237 } 00:05:16.237 }, 00:05:16.237 { 00:05:16.237 "method": "sock_impl_set_options", 00:05:16.237 "params": { 00:05:16.237 "impl_name": "ssl", 00:05:16.237 "recv_buf_size": 4096, 00:05:16.237 "send_buf_size": 4096, 00:05:16.237 "enable_recv_pipe": true, 00:05:16.237 "enable_quickack": false, 00:05:16.237 "enable_placement_id": 0, 00:05:16.237 "enable_zerocopy_send_server": true, 00:05:16.237 "enable_zerocopy_send_client": false, 00:05:16.237 "zerocopy_threshold": 0, 00:05:16.237 "tls_version": 0, 00:05:16.237 "enable_ktls": false 00:05:16.237 } 00:05:16.237 }, 00:05:16.237 { 00:05:16.237 "method": "sock_impl_set_options", 00:05:16.237 "params": { 00:05:16.237 "impl_name": "posix", 00:05:16.237 "recv_buf_size": 2097152, 00:05:16.237 "send_buf_size": 2097152, 00:05:16.237 "enable_recv_pipe": true, 
00:05:16.237 "enable_quickack": false, 00:05:16.237 "enable_placement_id": 0, 00:05:16.237 "enable_zerocopy_send_server": true, 00:05:16.237 "enable_zerocopy_send_client": false, 00:05:16.237 "zerocopy_threshold": 0, 00:05:16.237 "tls_version": 0, 00:05:16.237 "enable_ktls": false 00:05:16.237 } 00:05:16.237 } 00:05:16.237 ] 00:05:16.237 }, 00:05:16.237 { 00:05:16.237 "subsystem": "vmd", 00:05:16.237 "config": [] 00:05:16.237 }, 00:05:16.237 { 00:05:16.237 "subsystem": "accel", 00:05:16.237 "config": [ 00:05:16.237 { 00:05:16.237 "method": "accel_set_options", 00:05:16.237 "params": { 00:05:16.237 "small_cache_size": 128, 00:05:16.237 "large_cache_size": 16, 00:05:16.237 "task_count": 2048, 00:05:16.237 "sequence_count": 2048, 00:05:16.237 "buf_count": 2048 00:05:16.237 } 00:05:16.237 } 00:05:16.237 ] 00:05:16.237 }, 00:05:16.237 { 00:05:16.237 "subsystem": "bdev", 00:05:16.237 "config": [ 00:05:16.237 { 00:05:16.237 "method": "bdev_set_options", 00:05:16.237 "params": { 00:05:16.237 "bdev_io_pool_size": 65535, 00:05:16.237 "bdev_io_cache_size": 256, 00:05:16.237 "bdev_auto_examine": true, 00:05:16.237 "iobuf_small_cache_size": 128, 00:05:16.237 "iobuf_large_cache_size": 16 00:05:16.237 } 00:05:16.237 }, 00:05:16.237 { 00:05:16.237 "method": "bdev_raid_set_options", 00:05:16.237 "params": { 00:05:16.238 "process_window_size_kb": 1024, 00:05:16.238 "process_max_bandwidth_mb_sec": 0 00:05:16.238 } 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "method": "bdev_iscsi_set_options", 00:05:16.238 "params": { 00:05:16.238 "timeout_sec": 30 00:05:16.238 } 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "method": "bdev_nvme_set_options", 00:05:16.238 "params": { 00:05:16.238 "action_on_timeout": "none", 00:05:16.238 "timeout_us": 0, 00:05:16.238 "timeout_admin_us": 0, 00:05:16.238 "keep_alive_timeout_ms": 10000, 00:05:16.238 "arbitration_burst": 0, 00:05:16.238 "low_priority_weight": 0, 00:05:16.238 "medium_priority_weight": 0, 00:05:16.238 "high_priority_weight": 0, 00:05:16.238 
"nvme_adminq_poll_period_us": 10000, 00:05:16.238 "nvme_ioq_poll_period_us": 0, 00:05:16.238 "io_queue_requests": 0, 00:05:16.238 "delay_cmd_submit": true, 00:05:16.238 "transport_retry_count": 4, 00:05:16.238 "bdev_retry_count": 3, 00:05:16.238 "transport_ack_timeout": 0, 00:05:16.238 "ctrlr_loss_timeout_sec": 0, 00:05:16.238 "reconnect_delay_sec": 0, 00:05:16.238 "fast_io_fail_timeout_sec": 0, 00:05:16.238 "disable_auto_failback": false, 00:05:16.238 "generate_uuids": false, 00:05:16.238 "transport_tos": 0, 00:05:16.238 "nvme_error_stat": false, 00:05:16.238 "rdma_srq_size": 0, 00:05:16.238 "io_path_stat": false, 00:05:16.238 "allow_accel_sequence": false, 00:05:16.238 "rdma_max_cq_size": 0, 00:05:16.238 "rdma_cm_event_timeout_ms": 0, 00:05:16.238 "dhchap_digests": [ 00:05:16.238 "sha256", 00:05:16.238 "sha384", 00:05:16.238 "sha512" 00:05:16.238 ], 00:05:16.238 "dhchap_dhgroups": [ 00:05:16.238 "null", 00:05:16.238 "ffdhe2048", 00:05:16.238 "ffdhe3072", 00:05:16.238 "ffdhe4096", 00:05:16.238 "ffdhe6144", 00:05:16.238 "ffdhe8192" 00:05:16.238 ] 00:05:16.238 } 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "method": "bdev_nvme_set_hotplug", 00:05:16.238 "params": { 00:05:16.238 "period_us": 100000, 00:05:16.238 "enable": false 00:05:16.238 } 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "method": "bdev_wait_for_examine" 00:05:16.238 } 00:05:16.238 ] 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "subsystem": "scsi", 00:05:16.238 "config": null 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "subsystem": "scheduler", 00:05:16.238 "config": [ 00:05:16.238 { 00:05:16.238 "method": "framework_set_scheduler", 00:05:16.238 "params": { 00:05:16.238 "name": "static" 00:05:16.238 } 00:05:16.238 } 00:05:16.238 ] 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "subsystem": "vhost_scsi", 00:05:16.238 "config": [] 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "subsystem": "vhost_blk", 00:05:16.238 "config": [] 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "subsystem": "ublk", 00:05:16.238 
"config": [] 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "subsystem": "nbd", 00:05:16.238 "config": [] 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "subsystem": "nvmf", 00:05:16.238 "config": [ 00:05:16.238 { 00:05:16.238 "method": "nvmf_set_config", 00:05:16.238 "params": { 00:05:16.238 "discovery_filter": "match_any", 00:05:16.238 "admin_cmd_passthru": { 00:05:16.238 "identify_ctrlr": false 00:05:16.238 } 00:05:16.238 } 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "method": "nvmf_set_max_subsystems", 00:05:16.238 "params": { 00:05:16.238 "max_subsystems": 1024 00:05:16.238 } 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "method": "nvmf_set_crdt", 00:05:16.238 "params": { 00:05:16.238 "crdt1": 0, 00:05:16.238 "crdt2": 0, 00:05:16.238 "crdt3": 0 00:05:16.238 } 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "method": "nvmf_create_transport", 00:05:16.238 "params": { 00:05:16.238 "trtype": "TCP", 00:05:16.238 "max_queue_depth": 128, 00:05:16.238 "max_io_qpairs_per_ctrlr": 127, 00:05:16.238 "in_capsule_data_size": 4096, 00:05:16.238 "max_io_size": 131072, 00:05:16.238 "io_unit_size": 131072, 00:05:16.238 "max_aq_depth": 128, 00:05:16.238 "num_shared_buffers": 511, 00:05:16.238 "buf_cache_size": 4294967295, 00:05:16.238 "dif_insert_or_strip": false, 00:05:16.238 "zcopy": false, 00:05:16.238 "c2h_success": true, 00:05:16.238 "sock_priority": 0, 00:05:16.238 "abort_timeout_sec": 1, 00:05:16.238 "ack_timeout": 0, 00:05:16.238 "data_wr_pool_size": 0 00:05:16.238 } 00:05:16.238 } 00:05:16.238 ] 00:05:16.238 }, 00:05:16.238 { 00:05:16.238 "subsystem": "iscsi", 00:05:16.238 "config": [ 00:05:16.238 { 00:05:16.238 "method": "iscsi_set_options", 00:05:16.238 "params": { 00:05:16.238 "node_base": "iqn.2016-06.io.spdk", 00:05:16.238 "max_sessions": 128, 00:05:16.238 "max_connections_per_session": 2, 00:05:16.238 "max_queue_depth": 64, 00:05:16.238 "default_time2wait": 2, 00:05:16.238 "default_time2retain": 20, 00:05:16.238 "first_burst_length": 8192, 00:05:16.238 "immediate_data": true, 
00:05:16.238 "allow_duplicated_isid": false, 00:05:16.238 "error_recovery_level": 0, 00:05:16.238 "nop_timeout": 60, 00:05:16.238 "nop_in_interval": 30, 00:05:16.238 "disable_chap": false, 00:05:16.238 "require_chap": false, 00:05:16.238 "mutual_chap": false, 00:05:16.238 "chap_group": 0, 00:05:16.238 "max_large_datain_per_connection": 64, 00:05:16.238 "max_r2t_per_connection": 4, 00:05:16.238 "pdu_pool_size": 36864, 00:05:16.238 "immediate_data_pool_size": 16384, 00:05:16.238 "data_out_pool_size": 2048 00:05:16.238 } 00:05:16.238 } 00:05:16.238 ] 00:05:16.238 } 00:05:16.238 ] 00:05:16.238 } 00:05:16.238 03:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:16.238 03:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 697587 00:05:16.238 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 697587 ']' 00:05:16.238 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 697587 00:05:16.238 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:16.238 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.238 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 697587 00:05:16.238 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.238 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.238 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 697587' 00:05:16.238 killing process with pid 697587 00:05:16.238 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 697587 00:05:16.238 03:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 697587 00:05:16.803 03:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 
-- # local spdk_pid=697729 00:05:16.803 03:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:16.803 03:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:22.067 03:48:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 697729 00:05:22.067 03:48:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 697729 ']' 00:05:22.067 03:48:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 697729 00:05:22.067 03:48:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:22.067 03:48:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.067 03:48:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 697729 00:05:22.067 03:48:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.067 03:48:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.067 03:48:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 697729' 00:05:22.067 killing process with pid 697729 00:05:22.067 03:48:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 697729 00:05:22.067 03:48:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 697729 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:22.067 00:05:22.067 real 0m6.513s 00:05:22.067 user 0m6.101s 00:05:22.067 sys 0m0.692s 
00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.067 ************************************ 00:05:22.067 END TEST skip_rpc_with_json 00:05:22.067 ************************************ 00:05:22.067 03:48:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:22.067 03:48:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.067 03:48:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.067 03:48:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.067 ************************************ 00:05:22.067 START TEST skip_rpc_with_delay 00:05:22.067 ************************************ 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:22.067 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.324 [2024-07-25 03:48:37.391105] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:22.324 [2024-07-25 03:48:37.391211] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:22.324 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:22.324 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:22.324 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:22.324 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:22.324 00:05:22.324 real 0m0.072s 00:05:22.324 user 0m0.046s 00:05:22.324 sys 0m0.025s 00:05:22.324 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.324 03:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:22.324 ************************************ 00:05:22.324 END TEST skip_rpc_with_delay 00:05:22.324 ************************************ 00:05:22.324 03:48:37 skip_rpc -- 
rpc/skip_rpc.sh@77 -- # uname 00:05:22.324 03:48:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:22.324 03:48:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:22.324 03:48:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.324 03:48:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.324 03:48:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.324 ************************************ 00:05:22.324 START TEST exit_on_failed_rpc_init 00:05:22.324 ************************************ 00:05:22.324 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:22.324 03:48:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=698447 00:05:22.324 03:48:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.324 03:48:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 698447 00:05:22.324 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 698447 ']' 00:05:22.324 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.324 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.324 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:22.324 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.324 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:22.324 [2024-07-25 03:48:37.510136] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:22.324 [2024-07-25 03:48:37.510223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698447 ] 00:05:22.324 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.324 [2024-07-25 03:48:37.540806] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:22.324 [2024-07-25 03:48:37.572604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.582 [2024-07-25 03:48:37.663525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:22.840 03:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.840 [2024-07-25 03:48:37.976066] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:22.840 [2024-07-25 03:48:37.976142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698457 ] 00:05:22.840 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.840 [2024-07-25 03:48:38.005413] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:22.840 [2024-07-25 03:48:38.036652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.840 [2024-07-25 03:48:38.130923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.840 [2024-07-25 03:48:38.131030] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:22.840 [2024-07-25 03:48:38.131051] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:22.840 [2024-07-25 03:48:38.131066] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 698447 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 698447 ']' 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 698447 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 698447 
00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 698447' 00:05:23.097 killing process with pid 698447 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 698447 00:05:23.097 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 698447 00:05:23.663 00:05:23.663 real 0m1.213s 00:05:23.663 user 0m1.307s 00:05:23.663 sys 0m0.459s 00:05:23.663 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.663 03:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.663 ************************************ 00:05:23.663 END TEST exit_on_failed_rpc_init 00:05:23.663 ************************************ 00:05:23.663 03:48:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:23.663 00:05:23.663 real 0m13.509s 00:05:23.663 user 0m12.672s 00:05:23.663 sys 0m1.692s 00:05:23.663 03:48:38 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.663 03:48:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.663 ************************************ 00:05:23.663 END TEST skip_rpc 00:05:23.663 ************************************ 00:05:23.663 03:48:38 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:23.663 03:48:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.663 03:48:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.663 03:48:38 -- common/autotest_common.sh@10 -- # set +x 00:05:23.663 
************************************ 00:05:23.663 START TEST rpc_client 00:05:23.663 ************************************ 00:05:23.663 03:48:38 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:23.663 * Looking for test storage... 00:05:23.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:23.663 03:48:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:23.663 OK 00:05:23.663 03:48:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:23.663 00:05:23.663 real 0m0.070s 00:05:23.663 user 0m0.034s 00:05:23.663 sys 0m0.040s 00:05:23.663 03:48:38 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.663 03:48:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:23.663 ************************************ 00:05:23.663 END TEST rpc_client 00:05:23.663 ************************************ 00:05:23.663 03:48:38 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:23.663 03:48:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.663 03:48:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.663 03:48:38 -- common/autotest_common.sh@10 -- # set +x 00:05:23.663 ************************************ 00:05:23.663 START TEST json_config 00:05:23.663 ************************************ 00:05:23.663 03:48:38 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:23.663 03:48:38 json_config -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:23.663 03:48:38 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.663 03:48:38 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.663 03:48:38 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.663 03:48:38 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.663 03:48:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.663 03:48:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.663 03:48:38 json_config -- paths/export.sh@5 -- # export PATH 00:05:23.663 03:48:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@47 -- # : 0 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:23.663 
03:48:38 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:23.663 03:48:38 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:23.663 INFO: JSON configuration test init 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:23.663 03:48:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:23.663 03:48:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.663 03:48:38 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:23.663 03:48:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:23.664 03:48:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.664 03:48:38 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:23.664 03:48:38 json_config -- json_config/common.sh@9 -- # local app=target 00:05:23.664 03:48:38 json_config -- json_config/common.sh@10 -- # shift 00:05:23.664 03:48:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.664 03:48:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.664 03:48:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.664 03:48:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.664 03:48:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:05:23.664 03:48:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=698699 00:05:23.664 03:48:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:23.664 03:48:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.664 Waiting for target to run... 00:05:23.664 03:48:38 json_config -- json_config/common.sh@25 -- # waitforlisten 698699 /var/tmp/spdk_tgt.sock 00:05:23.664 03:48:38 json_config -- common/autotest_common.sh@831 -- # '[' -z 698699 ']' 00:05:23.664 03:48:38 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.664 03:48:38 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.664 03:48:38 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.664 03:48:38 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.664 03:48:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.921 [2024-07-25 03:48:38.967693] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:23.921 [2024-07-25 03:48:38.967793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698699 ] 00:05:23.921 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.179 [2024-07-25 03:48:39.274598] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:24.179 [2024-07-25 03:48:39.307884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.179 [2024-07-25 03:48:39.371079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.743 03:48:39 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.743 03:48:39 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:24.743 03:48:39 json_config -- json_config/common.sh@26 -- # echo '' 00:05:24.743 00:05:24.743 03:48:39 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:24.743 03:48:39 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:24.743 03:48:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:24.743 03:48:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.743 03:48:39 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:24.743 03:48:39 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:24.743 03:48:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:24.743 03:48:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.743 03:48:39 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:24.743 03:48:39 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:24.743 03:48:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:28.017 03:48:43 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:28.017 03:48:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:28.017 03:48:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:28.017 03:48:43 json_config -- common/autotest_common.sh@10 -- # set +x 
00:05:28.017 03:48:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:28.017 03:48:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:28.017 03:48:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:28.017 03:48:43 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:28.017 03:48:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:28.017 03:48:43 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@51 -- # sort 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:28.275 03:48:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:28.275 03:48:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:28.275 03:48:43 json_config -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:28.275 03:48:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:28.275 03:48:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:28.275 03:48:43 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.275 03:48:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.534 MallocForNvmf0 00:05:28.534 03:48:43 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:28.534 03:48:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:28.791 MallocForNvmf1 00:05:28.791 03:48:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:28.791 03:48:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.048 [2024-07-25 03:48:44.098906] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.048 03:48:44 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.048 03:48:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.306 03:48:44 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.306 03:48:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.306 03:48:44 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.306 03:48:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.563 03:48:44 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:29.563 03:48:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:29.820 [2024-07-25 03:48:45.082114] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:29.820 03:48:45 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:29.820 03:48:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:29.820 03:48:45 json_config -- common/autotest_common.sh@10 -- # set +x 
00:05:30.078 03:48:45 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:30.078 03:48:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.078 03:48:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.078 03:48:45 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:30.078 03:48:45 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.078 03:48:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.078 MallocBdevForConfigChangeCheck 00:05:30.334 03:48:45 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:30.334 03:48:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.334 03:48:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.334 03:48:45 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:30.334 03:48:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.591 03:48:45 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:30.591 INFO: shutting down applications... 
00:05:30.591 03:48:45 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:30.591 03:48:45 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:30.591 03:48:45 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:30.591 03:48:45 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:32.492 Calling clear_iscsi_subsystem 00:05:32.492 Calling clear_nvmf_subsystem 00:05:32.492 Calling clear_nbd_subsystem 00:05:32.492 Calling clear_ublk_subsystem 00:05:32.492 Calling clear_vhost_blk_subsystem 00:05:32.492 Calling clear_vhost_scsi_subsystem 00:05:32.492 Calling clear_bdev_subsystem 00:05:32.492 03:48:47 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:32.492 03:48:47 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:32.492 03:48:47 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:32.492 03:48:47 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.492 03:48:47 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:32.492 03:48:47 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:32.781 03:48:47 json_config -- json_config/json_config.sh@349 -- # break 00:05:32.781 03:48:47 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:32.781 03:48:47 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:32.781 03:48:47 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:32.781 03:48:47 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:32.781 03:48:47 json_config -- json_config/common.sh@35 -- # [[ -n 698699 ]] 00:05:32.781 03:48:47 json_config -- json_config/common.sh@38 -- # kill -SIGINT 698699 00:05:32.781 03:48:47 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:32.781 03:48:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.781 03:48:47 json_config -- json_config/common.sh@41 -- # kill -0 698699 00:05:32.781 03:48:47 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.349 03:48:48 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.349 03:48:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.349 03:48:48 json_config -- json_config/common.sh@41 -- # kill -0 698699 00:05:33.349 03:48:48 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:33.349 03:48:48 json_config -- json_config/common.sh@43 -- # break 00:05:33.349 03:48:48 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:33.349 03:48:48 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:33.349 SPDK target shutdown done 00:05:33.349 03:48:48 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:33.349 INFO: relaunching applications... 
00:05:33.349 03:48:48 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.349 03:48:48 json_config -- json_config/common.sh@9 -- # local app=target 00:05:33.349 03:48:48 json_config -- json_config/common.sh@10 -- # shift 00:05:33.349 03:48:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:33.349 03:48:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:33.349 03:48:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:33.349 03:48:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.349 03:48:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.349 03:48:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=700015 00:05:33.349 03:48:48 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.349 03:48:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:33.349 Waiting for target to run... 00:05:33.349 03:48:48 json_config -- json_config/common.sh@25 -- # waitforlisten 700015 /var/tmp/spdk_tgt.sock 00:05:33.349 03:48:48 json_config -- common/autotest_common.sh@831 -- # '[' -z 700015 ']' 00:05:33.349 03:48:48 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.349 03:48:48 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.349 03:48:48 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:33.349 03:48:48 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.349 03:48:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.349 [2024-07-25 03:48:48.410383] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:33.349 [2024-07-25 03:48:48.410468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid700015 ] 00:05:33.349 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.607 [2024-07-25 03:48:48.885646] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:33.865 [2024-07-25 03:48:48.919617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.865 [2024-07-25 03:48:49.001747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.147 [2024-07-25 03:48:52.035713] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.147 [2024-07-25 03:48:52.068168] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:37.713 03:48:52 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.713 03:48:52 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:37.713 03:48:52 json_config -- json_config/common.sh@26 -- # echo '' 00:05:37.713 00:05:37.713 03:48:52 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:37.713 03:48:52 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:37.713 INFO: Checking if target configuration is the same... 
00:05:37.713 03:48:52 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.713 03:48:52 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:37.713 03:48:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.713 + '[' 2 -ne 2 ']' 00:05:37.713 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:37.713 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:37.713 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:37.713 +++ basename /dev/fd/62 00:05:37.713 ++ mktemp /tmp/62.XXX 00:05:37.713 + tmp_file_1=/tmp/62.DLB 00:05:37.713 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.713 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.713 + tmp_file_2=/tmp/spdk_tgt_config.json.Cl8 00:05:37.713 + ret=0 00:05:37.713 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:37.971 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.242 + diff -u /tmp/62.DLB /tmp/spdk_tgt_config.json.Cl8 00:05:38.242 + echo 'INFO: JSON config files are the same' 00:05:38.242 INFO: JSON config files are the same 00:05:38.242 + rm /tmp/62.DLB /tmp/spdk_tgt_config.json.Cl8 00:05:38.242 + exit 0 00:05:38.242 03:48:53 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:38.242 03:48:53 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:38.242 INFO: changing configuration and checking if this can be detected... 
00:05:38.242 03:48:53 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.242 03:48:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.242 03:48:53 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.242 03:48:53 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:38.242 03:48:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.242 + '[' 2 -ne 2 ']' 00:05:38.242 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.242 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:38.242 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:38.242 +++ basename /dev/fd/62 00:05:38.242 ++ mktemp /tmp/62.XXX 00:05:38.504 + tmp_file_1=/tmp/62.mL2 00:05:38.504 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.504 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.504 + tmp_file_2=/tmp/spdk_tgt_config.json.Fvl 00:05:38.504 + ret=0 00:05:38.504 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.762 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.762 + diff -u /tmp/62.mL2 /tmp/spdk_tgt_config.json.Fvl 00:05:38.762 + ret=1 00:05:38.762 + echo '=== Start of file: /tmp/62.mL2 ===' 00:05:38.762 + cat /tmp/62.mL2 00:05:38.762 + echo '=== End of file: /tmp/62.mL2 ===' 00:05:38.762 + echo '' 00:05:38.762 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Fvl ===' 00:05:38.762 + cat /tmp/spdk_tgt_config.json.Fvl 00:05:38.762 + echo '=== End of file: /tmp/spdk_tgt_config.json.Fvl ===' 00:05:38.762 + echo '' 00:05:38.762 + rm /tmp/62.mL2 /tmp/spdk_tgt_config.json.Fvl 00:05:38.762 + exit 1 00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:38.762 INFO: configuration change detected. 
00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:38.762 03:48:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:38.762 03:48:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@321 -- # [[ -n 700015 ]] 00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:38.762 03:48:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:38.762 03:48:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:38.762 03:48:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:38.762 03:48:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.762 03:48:53 json_config -- json_config/json_config.sh@327 -- # killprocess 700015 00:05:38.762 03:48:53 json_config -- common/autotest_common.sh@950 -- # '[' -z 700015 ']' 00:05:38.762 03:48:53 json_config -- common/autotest_common.sh@954 -- # kill -0 700015 
00:05:38.762 03:48:53 json_config -- common/autotest_common.sh@955 -- # uname 00:05:38.762 03:48:53 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.762 03:48:53 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 700015 00:05:38.762 03:48:54 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.762 03:48:54 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.762 03:48:54 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 700015' 00:05:38.762 killing process with pid 700015 00:05:38.762 03:48:54 json_config -- common/autotest_common.sh@969 -- # kill 700015 00:05:38.762 03:48:54 json_config -- common/autotest_common.sh@974 -- # wait 700015 00:05:40.662 03:48:55 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.662 03:48:55 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:40.662 03:48:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:40.662 03:48:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.662 03:48:55 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:40.662 03:48:55 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:40.662 INFO: Success 00:05:40.662 00:05:40.662 real 0m16.785s 00:05:40.662 user 0m18.754s 00:05:40.662 sys 0m2.054s 00:05:40.662 03:48:55 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.662 03:48:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.662 ************************************ 00:05:40.662 END TEST json_config 00:05:40.662 ************************************ 00:05:40.662 03:48:55 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:40.662 03:48:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.662 03:48:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.662 03:48:55 -- common/autotest_common.sh@10 -- # set +x 00:05:40.662 ************************************ 00:05:40.662 START TEST json_config_extra_key 00:05:40.662 ************************************ 00:05:40.662 03:48:55 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:40.663 03:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:40.663 03:48:55 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:40.663 03:48:55 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.663 03:48:55 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.663 03:48:55 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.663 03:48:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.663 03:48:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.663 03:48:55 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.663 03:48:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:40.663 03:48:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:40.663 03:48:55 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:40.663 03:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:40.663 03:48:55 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:40.663 03:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:40.663 03:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:40.663 03:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:40.663 03:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:40.663 03:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:40.663 03:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:40.663 03:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:40.663 03:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:40.663 03:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:40.663 INFO: launching applications... 
00:05:40.663 03:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:40.663 03:48:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:40.663 03:48:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:40.663 03:48:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.663 03:48:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.663 03:48:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.663 03:48:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.663 03:48:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.663 03:48:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=700942 00:05:40.663 03:48:55 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:40.663 03:48:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.663 Waiting for target to run... 
00:05:40.663 03:48:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 700942 /var/tmp/spdk_tgt.sock 00:05:40.663 03:48:55 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 700942 ']' 00:05:40.663 03:48:55 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.663 03:48:55 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.663 03:48:55 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.663 03:48:55 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.663 03:48:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:40.663 [2024-07-25 03:48:55.800206] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:40.663 [2024-07-25 03:48:55.800301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid700942 ] 00:05:40.663 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.229 [2024-07-25 03:48:56.275972] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:41.229 [2024-07-25 03:48:56.309835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.229 [2024-07-25 03:48:56.388005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.487 03:48:56 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.487 03:48:56 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:41.487 03:48:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:41.487 00:05:41.487 03:48:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:41.487 INFO: shutting down applications... 00:05:41.487 03:48:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:41.487 03:48:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:41.487 03:48:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:41.487 03:48:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 700942 ]] 00:05:41.487 03:48:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 700942 00:05:41.487 03:48:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:41.487 03:48:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.487 03:48:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 700942 00:05:41.487 03:48:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:42.053 03:48:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:42.053 03:48:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.053 03:48:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 700942 00:05:42.053 03:48:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:42.053 03:48:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:42.053 
03:48:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:42.053 03:48:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:42.053 SPDK target shutdown done 00:05:42.053 03:48:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:42.053 Success 00:05:42.053 00:05:42.053 real 0m1.585s 00:05:42.053 user 0m1.432s 00:05:42.053 sys 0m0.574s 00:05:42.053 03:48:57 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.053 03:48:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:42.053 ************************************ 00:05:42.053 END TEST json_config_extra_key 00:05:42.053 ************************************ 00:05:42.053 03:48:57 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:42.053 03:48:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.053 03:48:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.053 03:48:57 -- common/autotest_common.sh@10 -- # set +x 00:05:42.053 ************************************ 00:05:42.053 START TEST alias_rpc 00:05:42.053 ************************************ 00:05:42.053 03:48:57 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:42.312 * Looking for test storage... 
00:05:42.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:42.312 03:48:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:42.312 03:48:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=701245 00:05:42.312 03:48:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.312 03:48:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 701245 00:05:42.312 03:48:57 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 701245 ']' 00:05:42.312 03:48:57 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.312 03:48:57 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.312 03:48:57 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.312 03:48:57 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.312 03:48:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.312 [2024-07-25 03:48:57.421963] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:42.312 [2024-07-25 03:48:57.422052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701245 ] 00:05:42.312 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.312 [2024-07-25 03:48:57.452916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:42.312 [2024-07-25 03:48:57.483118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.312 [2024-07-25 03:48:57.574887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.570 03:48:57 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.570 03:48:57 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:42.570 03:48:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:43.136 03:48:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 701245 00:05:43.136 03:48:58 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 701245 ']' 00:05:43.136 03:48:58 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 701245 00:05:43.136 03:48:58 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:43.136 03:48:58 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.136 03:48:58 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 701245 00:05:43.136 03:48:58 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.136 03:48:58 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.136 03:48:58 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 701245' 00:05:43.136 killing process with pid 701245 00:05:43.136 03:48:58 alias_rpc -- common/autotest_common.sh@969 -- # kill 701245 00:05:43.136 03:48:58 alias_rpc -- common/autotest_common.sh@974 -- # wait 701245 00:05:43.393 00:05:43.393 real 0m1.254s 00:05:43.393 user 0m1.390s 00:05:43.393 sys 0m0.407s 00:05:43.393 03:48:58 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.393 03:48:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.393 ************************************ 00:05:43.393 END TEST alias_rpc 00:05:43.393 ************************************ 00:05:43.393 03:48:58 -- spdk/autotest.sh@176 -- # [[ 0 -eq 
0 ]] 00:05:43.393 03:48:58 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:43.393 03:48:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.393 03:48:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.393 03:48:58 -- common/autotest_common.sh@10 -- # set +x 00:05:43.393 ************************************ 00:05:43.393 START TEST spdkcli_tcp 00:05:43.393 ************************************ 00:05:43.393 03:48:58 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:43.393 * Looking for test storage... 00:05:43.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:43.393 03:48:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:43.393 03:48:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:43.393 03:48:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:43.393 03:48:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:43.393 03:48:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:43.393 03:48:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:43.393 03:48:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:43.393 03:48:58 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.393 03:48:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.393 03:48:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=701437 00:05:43.393 03:48:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:43.393 
03:48:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 701437 00:05:43.393 03:48:58 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 701437 ']' 00:05:43.394 03:48:58 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.394 03:48:58 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.394 03:48:58 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.394 03:48:58 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.394 03:48:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.652 [2024-07-25 03:48:58.731427] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:43.652 [2024-07-25 03:48:58.731512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701437 ] 00:05:43.652 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.652 [2024-07-25 03:48:58.764007] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:43.652 [2024-07-25 03:48:58.792622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.652 [2024-07-25 03:48:58.879018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.652 [2024-07-25 03:48:58.879021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.910 03:48:59 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.910 03:48:59 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:43.910 03:48:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=701446 00:05:43.910 03:48:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:43.910 03:48:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:44.168 [ 00:05:44.168 "bdev_malloc_delete", 00:05:44.168 "bdev_malloc_create", 00:05:44.168 "bdev_null_resize", 00:05:44.168 "bdev_null_delete", 00:05:44.168 "bdev_null_create", 00:05:44.168 "bdev_nvme_cuse_unregister", 00:05:44.168 "bdev_nvme_cuse_register", 00:05:44.168 "bdev_opal_new_user", 00:05:44.168 "bdev_opal_set_lock_state", 00:05:44.168 "bdev_opal_delete", 00:05:44.168 "bdev_opal_get_info", 00:05:44.168 "bdev_opal_create", 00:05:44.168 "bdev_nvme_opal_revert", 00:05:44.168 "bdev_nvme_opal_init", 00:05:44.168 "bdev_nvme_send_cmd", 00:05:44.169 "bdev_nvme_get_path_iostat", 00:05:44.169 "bdev_nvme_get_mdns_discovery_info", 00:05:44.169 "bdev_nvme_stop_mdns_discovery", 00:05:44.169 "bdev_nvme_start_mdns_discovery", 00:05:44.169 "bdev_nvme_set_multipath_policy", 00:05:44.169 "bdev_nvme_set_preferred_path", 00:05:44.169 "bdev_nvme_get_io_paths", 00:05:44.169 "bdev_nvme_remove_error_injection", 00:05:44.169 "bdev_nvme_add_error_injection", 00:05:44.169 "bdev_nvme_get_discovery_info", 00:05:44.169 "bdev_nvme_stop_discovery", 00:05:44.169 "bdev_nvme_start_discovery", 00:05:44.169 "bdev_nvme_get_controller_health_info", 
00:05:44.169 "bdev_nvme_disable_controller", 00:05:44.169 "bdev_nvme_enable_controller", 00:05:44.169 "bdev_nvme_reset_controller", 00:05:44.169 "bdev_nvme_get_transport_statistics", 00:05:44.169 "bdev_nvme_apply_firmware", 00:05:44.169 "bdev_nvme_detach_controller", 00:05:44.169 "bdev_nvme_get_controllers", 00:05:44.169 "bdev_nvme_attach_controller", 00:05:44.169 "bdev_nvme_set_hotplug", 00:05:44.169 "bdev_nvme_set_options", 00:05:44.169 "bdev_passthru_delete", 00:05:44.169 "bdev_passthru_create", 00:05:44.169 "bdev_lvol_set_parent_bdev", 00:05:44.169 "bdev_lvol_set_parent", 00:05:44.169 "bdev_lvol_check_shallow_copy", 00:05:44.169 "bdev_lvol_start_shallow_copy", 00:05:44.169 "bdev_lvol_grow_lvstore", 00:05:44.169 "bdev_lvol_get_lvols", 00:05:44.169 "bdev_lvol_get_lvstores", 00:05:44.169 "bdev_lvol_delete", 00:05:44.169 "bdev_lvol_set_read_only", 00:05:44.169 "bdev_lvol_resize", 00:05:44.169 "bdev_lvol_decouple_parent", 00:05:44.169 "bdev_lvol_inflate", 00:05:44.169 "bdev_lvol_rename", 00:05:44.169 "bdev_lvol_clone_bdev", 00:05:44.169 "bdev_lvol_clone", 00:05:44.169 "bdev_lvol_snapshot", 00:05:44.169 "bdev_lvol_create", 00:05:44.169 "bdev_lvol_delete_lvstore", 00:05:44.169 "bdev_lvol_rename_lvstore", 00:05:44.169 "bdev_lvol_create_lvstore", 00:05:44.169 "bdev_raid_set_options", 00:05:44.169 "bdev_raid_remove_base_bdev", 00:05:44.169 "bdev_raid_add_base_bdev", 00:05:44.169 "bdev_raid_delete", 00:05:44.169 "bdev_raid_create", 00:05:44.169 "bdev_raid_get_bdevs", 00:05:44.169 "bdev_error_inject_error", 00:05:44.169 "bdev_error_delete", 00:05:44.169 "bdev_error_create", 00:05:44.169 "bdev_split_delete", 00:05:44.169 "bdev_split_create", 00:05:44.169 "bdev_delay_delete", 00:05:44.169 "bdev_delay_create", 00:05:44.169 "bdev_delay_update_latency", 00:05:44.169 "bdev_zone_block_delete", 00:05:44.169 "bdev_zone_block_create", 00:05:44.169 "blobfs_create", 00:05:44.169 "blobfs_detect", 00:05:44.169 "blobfs_set_cache_size", 00:05:44.169 "bdev_aio_delete", 00:05:44.169 
"bdev_aio_rescan", 00:05:44.169 "bdev_aio_create", 00:05:44.169 "bdev_ftl_set_property", 00:05:44.169 "bdev_ftl_get_properties", 00:05:44.169 "bdev_ftl_get_stats", 00:05:44.169 "bdev_ftl_unmap", 00:05:44.169 "bdev_ftl_unload", 00:05:44.169 "bdev_ftl_delete", 00:05:44.169 "bdev_ftl_load", 00:05:44.169 "bdev_ftl_create", 00:05:44.169 "bdev_virtio_attach_controller", 00:05:44.169 "bdev_virtio_scsi_get_devices", 00:05:44.169 "bdev_virtio_detach_controller", 00:05:44.169 "bdev_virtio_blk_set_hotplug", 00:05:44.169 "bdev_iscsi_delete", 00:05:44.169 "bdev_iscsi_create", 00:05:44.169 "bdev_iscsi_set_options", 00:05:44.169 "accel_error_inject_error", 00:05:44.169 "ioat_scan_accel_module", 00:05:44.169 "dsa_scan_accel_module", 00:05:44.169 "iaa_scan_accel_module", 00:05:44.169 "vfu_virtio_create_scsi_endpoint", 00:05:44.169 "vfu_virtio_scsi_remove_target", 00:05:44.169 "vfu_virtio_scsi_add_target", 00:05:44.169 "vfu_virtio_create_blk_endpoint", 00:05:44.169 "vfu_virtio_delete_endpoint", 00:05:44.169 "keyring_file_remove_key", 00:05:44.169 "keyring_file_add_key", 00:05:44.169 "keyring_linux_set_options", 00:05:44.169 "iscsi_get_histogram", 00:05:44.169 "iscsi_enable_histogram", 00:05:44.169 "iscsi_set_options", 00:05:44.169 "iscsi_get_auth_groups", 00:05:44.169 "iscsi_auth_group_remove_secret", 00:05:44.169 "iscsi_auth_group_add_secret", 00:05:44.169 "iscsi_delete_auth_group", 00:05:44.169 "iscsi_create_auth_group", 00:05:44.169 "iscsi_set_discovery_auth", 00:05:44.169 "iscsi_get_options", 00:05:44.169 "iscsi_target_node_request_logout", 00:05:44.169 "iscsi_target_node_set_redirect", 00:05:44.169 "iscsi_target_node_set_auth", 00:05:44.169 "iscsi_target_node_add_lun", 00:05:44.169 "iscsi_get_stats", 00:05:44.169 "iscsi_get_connections", 00:05:44.169 "iscsi_portal_group_set_auth", 00:05:44.169 "iscsi_start_portal_group", 00:05:44.169 "iscsi_delete_portal_group", 00:05:44.169 "iscsi_create_portal_group", 00:05:44.169 "iscsi_get_portal_groups", 00:05:44.169 
"iscsi_delete_target_node", 00:05:44.169 "iscsi_target_node_remove_pg_ig_maps", 00:05:44.169 "iscsi_target_node_add_pg_ig_maps", 00:05:44.169 "iscsi_create_target_node", 00:05:44.169 "iscsi_get_target_nodes", 00:05:44.169 "iscsi_delete_initiator_group", 00:05:44.169 "iscsi_initiator_group_remove_initiators", 00:05:44.169 "iscsi_initiator_group_add_initiators", 00:05:44.169 "iscsi_create_initiator_group", 00:05:44.169 "iscsi_get_initiator_groups", 00:05:44.169 "nvmf_set_crdt", 00:05:44.169 "nvmf_set_config", 00:05:44.169 "nvmf_set_max_subsystems", 00:05:44.169 "nvmf_stop_mdns_prr", 00:05:44.169 "nvmf_publish_mdns_prr", 00:05:44.169 "nvmf_subsystem_get_listeners", 00:05:44.169 "nvmf_subsystem_get_qpairs", 00:05:44.169 "nvmf_subsystem_get_controllers", 00:05:44.169 "nvmf_get_stats", 00:05:44.169 "nvmf_get_transports", 00:05:44.169 "nvmf_create_transport", 00:05:44.169 "nvmf_get_targets", 00:05:44.169 "nvmf_delete_target", 00:05:44.169 "nvmf_create_target", 00:05:44.169 "nvmf_subsystem_allow_any_host", 00:05:44.169 "nvmf_subsystem_remove_host", 00:05:44.169 "nvmf_subsystem_add_host", 00:05:44.169 "nvmf_ns_remove_host", 00:05:44.169 "nvmf_ns_add_host", 00:05:44.169 "nvmf_subsystem_remove_ns", 00:05:44.169 "nvmf_subsystem_add_ns", 00:05:44.169 "nvmf_subsystem_listener_set_ana_state", 00:05:44.169 "nvmf_discovery_get_referrals", 00:05:44.169 "nvmf_discovery_remove_referral", 00:05:44.169 "nvmf_discovery_add_referral", 00:05:44.169 "nvmf_subsystem_remove_listener", 00:05:44.169 "nvmf_subsystem_add_listener", 00:05:44.169 "nvmf_delete_subsystem", 00:05:44.169 "nvmf_create_subsystem", 00:05:44.169 "nvmf_get_subsystems", 00:05:44.169 "env_dpdk_get_mem_stats", 00:05:44.169 "nbd_get_disks", 00:05:44.169 "nbd_stop_disk", 00:05:44.169 "nbd_start_disk", 00:05:44.169 "ublk_recover_disk", 00:05:44.169 "ublk_get_disks", 00:05:44.169 "ublk_stop_disk", 00:05:44.169 "ublk_start_disk", 00:05:44.169 "ublk_destroy_target", 00:05:44.169 "ublk_create_target", 00:05:44.169 
"virtio_blk_create_transport", 00:05:44.169 "virtio_blk_get_transports", 00:05:44.169 "vhost_controller_set_coalescing", 00:05:44.169 "vhost_get_controllers", 00:05:44.169 "vhost_delete_controller", 00:05:44.169 "vhost_create_blk_controller", 00:05:44.169 "vhost_scsi_controller_remove_target", 00:05:44.169 "vhost_scsi_controller_add_target", 00:05:44.169 "vhost_start_scsi_controller", 00:05:44.169 "vhost_create_scsi_controller", 00:05:44.169 "thread_set_cpumask", 00:05:44.169 "framework_get_governor", 00:05:44.169 "framework_get_scheduler", 00:05:44.169 "framework_set_scheduler", 00:05:44.169 "framework_get_reactors", 00:05:44.169 "thread_get_io_channels", 00:05:44.169 "thread_get_pollers", 00:05:44.169 "thread_get_stats", 00:05:44.169 "framework_monitor_context_switch", 00:05:44.169 "spdk_kill_instance", 00:05:44.169 "log_enable_timestamps", 00:05:44.169 "log_get_flags", 00:05:44.169 "log_clear_flag", 00:05:44.169 "log_set_flag", 00:05:44.169 "log_get_level", 00:05:44.169 "log_set_level", 00:05:44.169 "log_get_print_level", 00:05:44.169 "log_set_print_level", 00:05:44.169 "framework_enable_cpumask_locks", 00:05:44.169 "framework_disable_cpumask_locks", 00:05:44.169 "framework_wait_init", 00:05:44.169 "framework_start_init", 00:05:44.169 "scsi_get_devices", 00:05:44.169 "bdev_get_histogram", 00:05:44.169 "bdev_enable_histogram", 00:05:44.169 "bdev_set_qos_limit", 00:05:44.169 "bdev_set_qd_sampling_period", 00:05:44.169 "bdev_get_bdevs", 00:05:44.169 "bdev_reset_iostat", 00:05:44.170 "bdev_get_iostat", 00:05:44.170 "bdev_examine", 00:05:44.170 "bdev_wait_for_examine", 00:05:44.170 "bdev_set_options", 00:05:44.170 "notify_get_notifications", 00:05:44.170 "notify_get_types", 00:05:44.170 "accel_get_stats", 00:05:44.170 "accel_set_options", 00:05:44.170 "accel_set_driver", 00:05:44.170 "accel_crypto_key_destroy", 00:05:44.170 "accel_crypto_keys_get", 00:05:44.170 "accel_crypto_key_create", 00:05:44.170 "accel_assign_opc", 00:05:44.170 "accel_get_module_info", 
00:05:44.170 "accel_get_opc_assignments", 00:05:44.170 "vmd_rescan", 00:05:44.170 "vmd_remove_device", 00:05:44.170 "vmd_enable", 00:05:44.170 "sock_get_default_impl", 00:05:44.170 "sock_set_default_impl", 00:05:44.170 "sock_impl_set_options", 00:05:44.170 "sock_impl_get_options", 00:05:44.170 "iobuf_get_stats", 00:05:44.170 "iobuf_set_options", 00:05:44.170 "keyring_get_keys", 00:05:44.170 "framework_get_pci_devices", 00:05:44.170 "framework_get_config", 00:05:44.170 "framework_get_subsystems", 00:05:44.170 "vfu_tgt_set_base_path", 00:05:44.170 "trace_get_info", 00:05:44.170 "trace_get_tpoint_group_mask", 00:05:44.170 "trace_disable_tpoint_group", 00:05:44.170 "trace_enable_tpoint_group", 00:05:44.170 "trace_clear_tpoint_mask", 00:05:44.170 "trace_set_tpoint_mask", 00:05:44.170 "spdk_get_version", 00:05:44.170 "rpc_get_methods" 00:05:44.170 ] 00:05:44.170 03:48:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:44.170 03:48:59 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:44.170 03:48:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.170 03:48:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:44.170 03:48:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 701437 00:05:44.170 03:48:59 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 701437 ']' 00:05:44.170 03:48:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 701437 00:05:44.170 03:48:59 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:44.170 03:48:59 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.170 03:48:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 701437 00:05:44.170 03:48:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.170 03:48:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.170 03:48:59 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 701437' 00:05:44.170 killing process with pid 701437 00:05:44.170 03:48:59 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 701437 00:05:44.170 03:48:59 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 701437 00:05:44.736 00:05:44.736 real 0m1.192s 00:05:44.736 user 0m2.110s 00:05:44.736 sys 0m0.448s 00:05:44.736 03:48:59 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.736 03:48:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.736 ************************************ 00:05:44.736 END TEST spdkcli_tcp 00:05:44.736 ************************************ 00:05:44.736 03:48:59 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:44.736 03:48:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.736 03:48:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.736 03:48:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.736 ************************************ 00:05:44.736 START TEST dpdk_mem_utility 00:05:44.736 ************************************ 00:05:44.736 03:48:59 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:44.736 * Looking for test storage... 
00:05:44.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:44.736 03:48:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:44.736 03:48:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=701641 00:05:44.736 03:48:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.736 03:48:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 701641 00:05:44.736 03:48:59 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 701641 ']' 00:05:44.736 03:48:59 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.736 03:48:59 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.736 03:48:59 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.736 03:48:59 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.736 03:48:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.736 [2024-07-25 03:48:59.965925] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:05:44.736 [2024-07-25 03:48:59.966008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701641 ] 00:05:44.736 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.736 [2024-07-25 03:48:59.996725] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:44.736 [2024-07-25 03:49:00.026620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.994 [2024-07-25 03:49:00.113818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.253 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.253 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:45.253 03:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:45.253 03:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:45.253 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.253 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.253 { 00:05:45.253 "filename": "/tmp/spdk_mem_dump.txt" 00:05:45.253 } 00:05:45.253 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.253 03:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:45.253 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:45.253 1 heaps totaling size 814.000000 MiB 00:05:45.253 size: 814.000000 MiB heap id: 0 00:05:45.253 end heaps---------- 00:05:45.253 8 mempools totaling size 598.116089 MiB 00:05:45.253 size: 212.674988 MiB name: 
PDU_immediate_data_Pool 00:05:45.253 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:45.253 size: 84.521057 MiB name: bdev_io_701641 00:05:45.253 size: 51.011292 MiB name: evtpool_701641 00:05:45.253 size: 50.003479 MiB name: msgpool_701641 00:05:45.253 size: 21.763794 MiB name: PDU_Pool 00:05:45.253 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:45.253 size: 0.026123 MiB name: Session_Pool 00:05:45.253 end mempools------- 00:05:45.253 6 memzones totaling size 4.142822 MiB 00:05:45.253 size: 1.000366 MiB name: RG_ring_0_701641 00:05:45.253 size: 1.000366 MiB name: RG_ring_1_701641 00:05:45.253 size: 1.000366 MiB name: RG_ring_4_701641 00:05:45.253 size: 1.000366 MiB name: RG_ring_5_701641 00:05:45.253 size: 0.125366 MiB name: RG_ring_2_701641 00:05:45.253 size: 0.015991 MiB name: RG_ring_3_701641 00:05:45.253 end memzones------- 00:05:45.253 03:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:45.253 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:45.253 list of free elements. 
size: 12.519348 MiB 00:05:45.253 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:45.253 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:45.253 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:45.253 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:45.253 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:45.253 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:45.253 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:45.253 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:45.253 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:45.253 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:45.253 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:45.253 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:45.253 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:45.253 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:45.253 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:45.253 list of standard malloc elements. 
size: 199.218079 MiB 00:05:45.253 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:45.253 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:45.253 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:45.253 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:45.253 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:45.253 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:45.253 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:45.253 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:45.253 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:45.253 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:45.253 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:45.253 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:45.253 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:45.253 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:45.253 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:45.253 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:45.253 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:45.253 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:45.253 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:45.253 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:45.253 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:45.253 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:45.253 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:45.253 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:45.253 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:45.253 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:45.253 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:45.253 element at 
address: 0x20000b27da00 with size: 0.000183 MiB 00:05:45.253 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:45.253 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:45.253 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:45.253 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:45.253 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:45.253 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:45.253 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:45.253 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:45.253 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:45.253 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:45.253 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:45.253 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:45.253 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:45.253 list of memzone associated elements. 
size: 602.262573 MiB 00:05:45.253 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:45.253 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:45.253 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:45.253 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:45.253 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:45.253 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_701641_0 00:05:45.253 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:45.253 associated memzone info: size: 48.002930 MiB name: MP_evtpool_701641_0 00:05:45.253 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:45.253 associated memzone info: size: 48.002930 MiB name: MP_msgpool_701641_0 00:05:45.253 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:45.253 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:45.253 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:45.253 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:45.253 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:45.253 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_701641 00:05:45.253 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:45.253 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_701641 00:05:45.253 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:45.253 associated memzone info: size: 1.007996 MiB name: MP_evtpool_701641 00:05:45.253 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:45.253 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:45.253 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:45.253 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:45.253 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:45.253 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:45.253 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:45.253 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:45.253 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:45.253 associated memzone info: size: 1.000366 MiB name: RG_ring_0_701641 00:05:45.253 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:45.253 associated memzone info: size: 1.000366 MiB name: RG_ring_1_701641 00:05:45.253 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:45.253 associated memzone info: size: 1.000366 MiB name: RG_ring_4_701641 00:05:45.253 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:45.253 associated memzone info: size: 1.000366 MiB name: RG_ring_5_701641 00:05:45.253 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:45.253 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_701641 00:05:45.253 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:45.253 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:45.253 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:45.253 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:45.253 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:45.253 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:45.253 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:45.253 associated memzone info: size: 0.125366 MiB name: RG_ring_2_701641 00:05:45.253 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:45.253 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:45.253 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:45.253 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:45.254 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:45.254 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_701641 00:05:45.254 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:45.254 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:45.254 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:45.254 associated memzone info: size: 0.000183 MiB name: MP_msgpool_701641 00:05:45.254 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:45.254 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_701641 00:05:45.254 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:45.254 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:45.254 03:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:45.254 03:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 701641 00:05:45.254 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 701641 ']' 00:05:45.254 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 701641 00:05:45.254 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:45.254 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.254 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 701641 00:05:45.254 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.254 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.254 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 701641' 00:05:45.254 killing process with pid 701641 00:05:45.254 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 701641 00:05:45.254 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 701641 00:05:45.819 00:05:45.820 real 0m1.049s 00:05:45.820 user 0m1.015s 
00:05:45.820 sys 0m0.411s 00:05:45.820 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.820 03:49:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.820 ************************************ 00:05:45.820 END TEST dpdk_mem_utility 00:05:45.820 ************************************ 00:05:45.820 03:49:00 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:45.820 03:49:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.820 03:49:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.820 03:49:00 -- common/autotest_common.sh@10 -- # set +x 00:05:45.820 ************************************ 00:05:45.820 START TEST event 00:05:45.820 ************************************ 00:05:45.820 03:49:00 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:45.820 * Looking for test storage... 
00:05:45.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:45.820 03:49:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:45.820 03:49:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:45.820 03:49:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:45.820 03:49:01 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:45.820 03:49:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.820 03:49:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.820 ************************************ 00:05:45.820 START TEST event_perf 00:05:45.820 ************************************ 00:05:45.820 03:49:01 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:45.820 Running I/O for 1 seconds...[2024-07-25 03:49:01.053182] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:45.820 [2024-07-25 03:49:01.053257] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701827 ] 00:05:45.820 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.820 [2024-07-25 03:49:01.086323] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:45.820 [2024-07-25 03:49:01.117394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.078 [2024-07-25 03:49:01.211399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.078 [2024-07-25 03:49:01.211453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.078 [2024-07-25 03:49:01.211570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.078 [2024-07-25 03:49:01.211573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.011 Running I/O for 1 seconds... 00:05:47.011 lcore 0: 236771 00:05:47.011 lcore 1: 236771 00:05:47.011 lcore 2: 236771 00:05:47.011 lcore 3: 236770 00:05:47.011 done. 00:05:47.011 00:05:47.011 real 0m1.253s 00:05:47.011 user 0m4.168s 00:05:47.011 sys 0m0.080s 00:05:47.011 03:49:02 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.011 03:49:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.011 ************************************ 00:05:47.011 END TEST event_perf 00:05:47.011 ************************************ 00:05:47.269 03:49:02 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:47.269 03:49:02 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:47.269 03:49:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.269 03:49:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.269 ************************************ 00:05:47.269 START TEST event_reactor 00:05:47.269 ************************************ 00:05:47.269 03:49:02 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:47.269 [2024-07-25 03:49:02.359893] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:05:47.269 [2024-07-25 03:49:02.359959] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701993 ] 00:05:47.269 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.269 [2024-07-25 03:49:02.391622] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:47.269 [2024-07-25 03:49:02.423642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.269 [2024-07-25 03:49:02.515637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.647 test_start 00:05:48.647 oneshot 00:05:48.647 tick 100 00:05:48.647 tick 100 00:05:48.647 tick 250 00:05:48.647 tick 100 00:05:48.647 tick 100 00:05:48.647 tick 100 00:05:48.647 tick 250 00:05:48.647 tick 500 00:05:48.647 tick 100 00:05:48.647 tick 100 00:05:48.647 tick 250 00:05:48.647 tick 100 00:05:48.647 tick 100 00:05:48.647 test_end 00:05:48.647 00:05:48.647 real 0m1.253s 00:05:48.647 user 0m1.163s 00:05:48.647 sys 0m0.085s 00:05:48.647 03:49:03 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.647 03:49:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:48.647 ************************************ 00:05:48.647 END TEST event_reactor 00:05:48.647 ************************************ 00:05:48.647 03:49:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:48.647 03:49:03 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:48.647 03:49:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.647 03:49:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.647 ************************************ 00:05:48.647 
START TEST event_reactor_perf 00:05:48.647 ************************************ 00:05:48.647 03:49:03 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:48.647 [2024-07-25 03:49:03.659401] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:48.647 [2024-07-25 03:49:03.659463] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702145 ] 00:05:48.647 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.647 [2024-07-25 03:49:03.689934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:48.647 [2024-07-25 03:49:03.721747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.647 [2024-07-25 03:49:03.811771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.049 test_start 00:05:50.049 test_end 00:05:50.049 Performance: 353935 events per second 00:05:50.049 00:05:50.049 real 0m1.249s 00:05:50.049 user 0m1.170s 00:05:50.049 sys 0m0.074s 00:05:50.049 03:49:04 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.049 03:49:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.049 ************************************ 00:05:50.049 END TEST event_reactor_perf 00:05:50.049 ************************************ 00:05:50.049 03:49:04 event -- event/event.sh@49 -- # uname -s 00:05:50.049 03:49:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:50.049 03:49:04 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:50.049 03:49:04 event -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.049 03:49:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.049 03:49:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.049 ************************************ 00:05:50.049 START TEST event_scheduler 00:05:50.049 ************************************ 00:05:50.049 03:49:04 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:50.049 * Looking for test storage... 00:05:50.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:50.049 03:49:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:50.049 03:49:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=702327 00:05:50.049 03:49:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:50.049 03:49:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.049 03:49:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 702327 00:05:50.049 03:49:04 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 702327 ']' 00:05:50.049 03:49:04 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.049 03:49:04 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.049 03:49:04 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:50.049 03:49:04 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.049 03:49:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.049 [2024-07-25 03:49:05.041215] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:50.049 [2024-07-25 03:49:05.041307] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702327 ] 00:05:50.049 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.049 [2024-07-25 03:49:05.072342] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:50.049 [2024-07-25 03:49:05.099623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.049 [2024-07-25 03:49:05.189653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.049 [2024-07-25 03:49:05.189731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.049 [2024-07-25 03:49:05.189787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.049 [2024-07-25 03:49:05.189790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.049 03:49:05 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.049 03:49:05 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:50.049 03:49:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:50.049 03:49:05 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.049 03:49:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.049 [2024-07-25 03:49:05.262665] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some 
but not all of a set of SMT siblings 00:05:50.049 [2024-07-25 03:49:05.262690] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:50.049 [2024-07-25 03:49:05.262721] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:50.049 [2024-07-25 03:49:05.262732] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:50.049 [2024-07-25 03:49:05.262741] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:50.049 03:49:05 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.049 03:49:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:50.049 03:49:05 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.049 03:49:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.049 [2024-07-25 03:49:05.347632] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:50.049 03:49:05 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.049 03:49:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:50.049 03:49:05 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.307 03:49:05 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.307 03:49:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.307 ************************************ 00:05:50.307 START TEST scheduler_create_thread 00:05:50.307 ************************************ 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.307 2 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.307 3 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.307 4 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.307 5 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.307 6 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:50.307 7 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.307 8 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.307 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.307 9 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.308 10 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:50.308 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.308 03:49:05 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.872 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.872 00:05:50.872 real 0m0.587s 00:05:50.872 user 0m0.009s 00:05:50.872 sys 0m0.003s 00:05:50.872 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.872 03:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.872 ************************************ 00:05:50.872 END TEST scheduler_create_thread 00:05:50.872 ************************************ 00:05:50.872 03:49:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:50.872 03:49:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 702327 00:05:50.872 03:49:05 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 702327 ']' 00:05:50.872 03:49:05 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 702327 00:05:50.872 03:49:05 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:50.872 03:49:05 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.872 03:49:05 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 702327 00:05:50.872 03:49:06 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:50.872 03:49:06 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:50.872 03:49:06 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 702327' 00:05:50.872 killing process with pid 702327 00:05:50.872 03:49:06 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 702327 00:05:50.872 03:49:06 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 702327 00:05:51.439 [2024-07-25 03:49:06.443824] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:51.439 00:05:51.439 real 0m1.719s 00:05:51.439 user 0m2.275s 00:05:51.439 sys 0m0.319s 00:05:51.439 03:49:06 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.439 03:49:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.440 ************************************ 00:05:51.440 END TEST event_scheduler 00:05:51.440 ************************************ 00:05:51.440 03:49:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:51.440 03:49:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:51.440 03:49:06 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.440 03:49:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.440 03:49:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.440 ************************************ 00:05:51.440 START TEST app_repeat 00:05:51.440 ************************************ 00:05:51.440 03:49:06 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:51.440 03:49:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.440 03:49:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.440 03:49:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:51.440 03:49:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.440 03:49:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:51.440 03:49:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:51.440 03:49:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:51.440 03:49:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=702638 00:05:51.440 03:49:06 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r 
/var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:51.440 03:49:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.440 03:49:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 702638' 00:05:51.440 Process app_repeat pid: 702638 00:05:51.440 03:49:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:51.440 03:49:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:51.440 spdk_app_start Round 0 00:05:51.440 03:49:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 702638 /var/tmp/spdk-nbd.sock 00:05:51.440 03:49:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 702638 ']' 00:05:51.440 03:49:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.440 03:49:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.440 03:49:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.440 03:49:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.440 03:49:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.697 [2024-07-25 03:49:06.746146] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:51.697 [2024-07-25 03:49:06.746221] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702638 ] 00:05:51.697 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.697 [2024-07-25 03:49:06.777963] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:51.697 [2024-07-25 03:49:06.809742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.697 [2024-07-25 03:49:06.903214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.697 [2024-07-25 03:49:06.903219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.955 03:49:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.955 03:49:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:51.955 03:49:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.955 Malloc0 00:05:52.213 03:49:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.470 Malloc1 00:05:52.470 03:49:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.470 03:49:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.470 03:49:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.470 03:49:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.470 03:49:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.470 03:49:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.470 03:49:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.470 03:49:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.470 03:49:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.470 03:49:07 event.app_repeat -- 
bdev/nbd_common.sh@10 -- # local bdev_list
00:05:52.470 03:49:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:52.470 03:49:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:52.470 03:49:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:52.470 03:49:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:52.470 03:49:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:52.470 03:49:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:52.728 /dev/nbd0
00:05:52.728 03:49:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:52.728 03:49:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:52.728 03:49:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:52.728 03:49:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:52.728 03:49:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:52.728 03:49:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:52.728 03:49:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:52.728 03:49:07 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:52.728 03:49:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:52.728 03:49:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:52.728 03:49:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:52.728 1+0 records in
00:05:52.728 1+0 records out
00:05:52.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198964 s, 20.6 MB/s
00:05:52.728 03:49:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:52.728 03:49:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:52.728 03:49:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:52.728 03:49:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:52.728 03:49:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:52.728 03:49:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:52.728 03:49:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:52.728 03:49:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:52.985 /dev/nbd1
00:05:52.986 03:49:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:52.986 03:49:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:52.986 03:49:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:52.986 03:49:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:52.986 03:49:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:52.986 03:49:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:52.986 03:49:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:52.986 03:49:08 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:52.986 03:49:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:52.986 03:49:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:52.986 03:49:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:52.986 1+0 records in
00:05:52.986 1+0 records out
00:05:52.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200885 s, 20.4 MB/s
00:05:52.986 03:49:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:52.986 03:49:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:52.986 03:49:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:52.986 03:49:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:52.986 03:49:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:52.986 03:49:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:52.986 03:49:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:52.986 03:49:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:52.986 03:49:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:52.986 03:49:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:53.244 {
00:05:53.244 "nbd_device": "/dev/nbd0",
00:05:53.244 "bdev_name": "Malloc0"
00:05:53.244 },
00:05:53.244 {
00:05:53.244 "nbd_device": "/dev/nbd1",
00:05:53.244 "bdev_name": "Malloc1"
00:05:53.244 }
00:05:53.244 ]'
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:53.244 {
00:05:53.244 "nbd_device": "/dev/nbd0",
00:05:53.244 "bdev_name": "Malloc0"
00:05:53.244 },
00:05:53.244 {
00:05:53.244 "nbd_device": "/dev/nbd1",
00:05:53.244 "bdev_name": "Malloc1"
00:05:53.244 }
00:05:53.244 ]'
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:53.244 /dev/nbd1'
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:53.244 /dev/nbd1'
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:53.244 256+0 records in
00:05:53.244 256+0 records out
00:05:53.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496772 s, 211 MB/s
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:53.244 256+0 records in
00:05:53.244 256+0 records out
00:05:53.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244207 s, 42.9 MB/s
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:53.244 256+0 records in
00:05:53.244 256+0 records out
00:05:53.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258059 s, 40.6 MB/s
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:53.244 03:49:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:53.502 03:49:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:53.502 03:49:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:53.502 03:49:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:53.502 03:49:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:53.502 03:49:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:53.502 03:49:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:53.502 03:49:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:53.502 03:49:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:53.502 03:49:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:53.502 03:49:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:53.759 03:49:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:53.759 03:49:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:53.759 03:49:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:53.759 03:49:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:53.759 03:49:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:53.759 03:49:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:53.759 03:49:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:53.759 03:49:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:53.759 03:49:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:53.759 03:49:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:53.759 03:49:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:54.016 03:49:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:54.016 03:49:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:54.016 03:49:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:54.016 03:49:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:54.016 03:49:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:54.016 03:49:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:54.273 03:49:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:54.273 03:49:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:54.273 03:49:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:54.273 03:49:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:54.273 03:49:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:54.273 03:49:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:54.273 03:49:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:54.530 03:49:09 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:54.530 [2024-07-25 03:49:09.796765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:54.788 [2024-07-25 03:49:09.886946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:54.788 [2024-07-25 03:49:09.886950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:54.788 [2024-07-25 03:49:09.948391] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:54.788 [2024-07-25 03:49:09.948463] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:57.313 03:49:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:57.313 03:49:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:57.313 spdk_app_start Round 1
00:05:57.313 03:49:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 702638 /var/tmp/spdk-nbd.sock
00:05:57.313 03:49:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 702638 ']'
00:05:57.313 03:49:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:57.313 03:49:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:57.313 03:49:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:57.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:57.313 03:49:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:57.313 03:49:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:57.571 03:49:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:57.571 03:49:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:57.571 03:49:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:57.829 Malloc0
00:05:57.829 03:49:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:58.086 Malloc1
00:05:58.086 03:49:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:58.086 03:49:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:58.086 03:49:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:58.086 03:49:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:58.086 03:49:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:58.086 03:49:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:58.086 03:49:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:58.086 03:49:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:58.086 03:49:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:58.086 03:49:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:58.086 03:49:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:58.087 03:49:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:58.087 03:49:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:58.087 03:49:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:58.087 03:49:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:58.087 03:49:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:58.344 /dev/nbd0
00:05:58.344 03:49:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:58.344 03:49:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:58.344 03:49:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:58.344 03:49:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:58.344 03:49:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:58.344 03:49:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:58.344 03:49:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:58.344 03:49:13 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:58.344 03:49:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:58.344 03:49:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:58.344 03:49:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:58.344 1+0 records in
00:05:58.344 1+0 records out
00:05:58.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00015709 s, 26.1 MB/s
00:05:58.344 03:49:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:58.344 03:49:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:58.344 03:49:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:58.344 03:49:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:58.344 03:49:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:58.344 03:49:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:58.344 03:49:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:58.344 03:49:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:58.601 /dev/nbd1
00:05:58.601 03:49:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:58.601 03:49:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:58.601 03:49:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:58.601 03:49:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:58.601 03:49:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:58.601 03:49:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:58.601 03:49:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:58.601 03:49:13 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:58.602 03:49:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:58.602 03:49:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:58.602 03:49:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:58.602 1+0 records in
00:05:58.602 1+0 records out
00:05:58.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215089 s, 19.0 MB/s
00:05:58.602 03:49:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:58.602 03:49:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:58.602 03:49:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:58.602 03:49:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:58.602 03:49:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:58.602 03:49:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:58.602 03:49:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:58.602 03:49:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:58.602 03:49:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:58.602 03:49:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:58.859 03:49:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:58.859 {
00:05:58.859 "nbd_device": "/dev/nbd0",
00:05:58.859 "bdev_name": "Malloc0"
00:05:58.859 },
00:05:58.859 {
00:05:58.859 "nbd_device": "/dev/nbd1",
00:05:58.859 "bdev_name": "Malloc1"
00:05:58.859 }
00:05:58.859 ]'
00:05:58.859 03:49:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:58.859 {
00:05:58.859 "nbd_device": "/dev/nbd0",
00:05:58.859 "bdev_name": "Malloc0"
00:05:58.859 },
00:05:58.859 {
00:05:58.859 "nbd_device": "/dev/nbd1",
00:05:58.859 "bdev_name": "Malloc1"
00:05:58.859 }
00:05:58.859 ]'
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:59.117 /dev/nbd1'
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:59.117 /dev/nbd1'
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:59.117 256+0 records in
00:05:59.117 256+0 records out
00:05:59.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00405902 s, 258 MB/s
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:59.117 256+0 records in
00:05:59.117 256+0 records out
00:05:59.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246248 s, 42.6 MB/s
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:59.117 256+0 records in
00:05:59.117 256+0 records out
00:05:59.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254908 s, 41.1 MB/s
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:59.117 03:49:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:59.118 03:49:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:59.118 03:49:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:59.118 03:49:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:59.118 03:49:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:59.118 03:49:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:59.118 03:49:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:59.118 03:49:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:59.118 03:49:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:59.118 03:49:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:59.375 03:49:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:59.375 03:49:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:59.375 03:49:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:59.375 03:49:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:59.375 03:49:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:59.375 03:49:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:59.375 03:49:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:59.375 03:49:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:59.375 03:49:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:59.375 03:49:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:59.633 03:49:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:59.633 03:49:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:59.633 03:49:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:59.633 03:49:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:59.633 03:49:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:59.633 03:49:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:59.633 03:49:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:59.633 03:49:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:59.633 03:49:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:59.633 03:49:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:59.633 03:49:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:59.891 03:49:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:59.891 03:49:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:59.891 03:49:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:59.891 03:49:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:59.891 03:49:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:59.891 03:49:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:59.891 03:49:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:59.891 03:49:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:59.891 03:49:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:59.891 03:49:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:59.891 03:49:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:59.891 03:49:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:59.891 03:49:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:00.149 03:49:15 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:00.407 [2024-07-25 03:49:15.594653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:00.407 [2024-07-25 03:49:15.684999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:00.407 [2024-07-25 03:49:15.685003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.665 [2024-07-25 03:49:15.747681] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:00.665 [2024-07-25 03:49:15.747761] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:03.192 03:49:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:03.192 03:49:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:03.192 spdk_app_start Round 2
00:06:03.192 03:49:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 702638 /var/tmp/spdk-nbd.sock
00:06:03.192 03:49:18 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 702638 ']'
00:06:03.192 03:49:18 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:03.192 03:49:18 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:03.192 03:49:18 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:03.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:03.192 03:49:18 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:03.192 03:49:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:03.450 03:49:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:03.450 03:49:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:03.450 03:49:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:03.707 Malloc0
00:06:03.707 03:49:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:03.965 Malloc1
00:06:03.965 03:49:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:03.965 03:49:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:04.223 /dev/nbd0
00:06:04.223 03:49:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:04.223 03:49:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:04.223 03:49:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:06:04.223 03:49:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:04.223 03:49:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:04.223 03:49:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:04.223 03:49:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:06:04.223 03:49:19 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:04.223 03:49:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:04.223 03:49:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:04.223 03:49:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:04.223 1+0 records in
00:06:04.223 1+0 records out
00:06:04.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018576 s, 22.0 MB/s
00:06:04.223 03:49:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:04.223 03:49:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:04.223 03:49:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:04.223 03:49:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:04.223 03:49:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:04.223 03:49:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:04.223 03:49:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:04.223 03:49:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:04.481 /dev/nbd1
00:06:04.481 03:49:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:04.481 03:49:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:04.481 03:49:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:06:04.481 03:49:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:04.481 03:49:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:04.481 03:49:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:04.481 03:49:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:06:04.481 03:49:19 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:04.481 03:49:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:04.481 03:49:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:04.481 03:49:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:04.481 1+0 records in
00:06:04.481 1+0 records out
00:06:04.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189848 s, 21.6 MB/s
00:06:04.481 03:49:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:04.481 03:49:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:04.481 03:49:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:04.481 03:49:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:04.481 03:49:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:04.481 03:49:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:04.481 03:49:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:04.481 03:49:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:04.481 03:49:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:04.481 03:49:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:04.739 {
00:06:04.739 "nbd_device": "/dev/nbd0",
00:06:04.739 "bdev_name": "Malloc0"
00:06:04.739 },
00:06:04.739 {
00:06:04.739 "nbd_device": "/dev/nbd1",
00:06:04.739 "bdev_name": "Malloc1"
00:06:04.739 }
00:06:04.739 ]'
00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:04.739 {
00:06:04.739 "nbd_device": "/dev/nbd0",
00:06:04.739 "bdev_name": "Malloc0"
00:06:04.739 },
00:06:04.739 {
00:06:04.739 "nbd_device": "/dev/nbd1",
00:06:04.739 "bdev_name": "Malloc1"
00:06:04.739 }
00:06:04.739 ]'
00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:04.739 /dev/nbd1'
00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:04.739 /dev/nbd1'
03:49:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.739 256+0 records in 00:06:04.739 256+0 records out 00:06:04.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508082 s, 206 MB/s 00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.739 03:49:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.739 256+0 records in 00:06:04.739 256+0 records out 00:06:04.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231703 s, 45.3 MB/s 00:06:04.739 03:49:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.739 03:49:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.739 256+0 records in 00:06:04.739 256+0 records out 00:06:04.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282928 s, 37.1 MB/s 00:06:04.739 03:49:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.739 03:49:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.739 03:49:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.739 03:49:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.739 03:49:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.739 03:49:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.739 03:49:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.739 03:49:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.739 03:49:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.997 03:49:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.997 03:49:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.997 03:49:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.997 03:49:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.997 03:49:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.997 03:49:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:04.997 03:49:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.997 03:49:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.997 03:49:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.997 03:49:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.255 03:49:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.255 03:49:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.255 03:49:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.255 03:49:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.255 03:49:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.255 03:49:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.255 03:49:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.255 03:49:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.255 03:49:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.255 03:49:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.546 03:49:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.546 03:49:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.546 03:49:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.546 03:49:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.546 03:49:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.546 03:49:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.546 03:49:20 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:05.546 03:49:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.546 03:49:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.546 03:49:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.546 03:49:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.811 03:49:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.811 03:49:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.811 03:49:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.811 03:49:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.811 03:49:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.811 03:49:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.811 03:49:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.811 03:49:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.811 03:49:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.811 03:49:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.811 03:49:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.811 03:49:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.811 03:49:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.069 03:49:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.326 [2024-07-25 03:49:21.385512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.326 [2024-07-25 03:49:21.474671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.326 [2024-07-25 03:49:21.474674] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.326 [2024-07-25 03:49:21.536807] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.326 [2024-07-25 03:49:21.536875] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.609 03:49:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 702638 /var/tmp/spdk-nbd.sock 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 702638 ']' 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:09.609 03:49:24 event.app_repeat -- event/event.sh@39 -- # killprocess 702638 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 702638 ']' 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 702638 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 702638 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 702638' 00:06:09.609 killing process with pid 702638 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@969 -- # kill 702638 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@974 -- # wait 702638 00:06:09.609 spdk_app_start is called in Round 0. 00:06:09.609 Shutdown signal received, stop current app iteration 00:06:09.609 Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 reinitialization... 00:06:09.609 spdk_app_start is called in Round 1. 00:06:09.609 Shutdown signal received, stop current app iteration 00:06:09.609 Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 reinitialization... 00:06:09.609 spdk_app_start is called in Round 2. 
00:06:09.609 Shutdown signal received, stop current app iteration 00:06:09.609 Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 reinitialization... 00:06:09.609 spdk_app_start is called in Round 3. 00:06:09.609 Shutdown signal received, stop current app iteration 00:06:09.609 03:49:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:09.609 03:49:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:09.609 00:06:09.609 real 0m17.938s 00:06:09.609 user 0m38.976s 00:06:09.609 sys 0m3.285s 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.609 03:49:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.609 ************************************ 00:06:09.609 END TEST app_repeat 00:06:09.609 ************************************ 00:06:09.609 03:49:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:09.609 03:49:24 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:09.609 03:49:24 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.609 03:49:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.609 03:49:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.609 ************************************ 00:06:09.609 START TEST cpu_locks 00:06:09.609 ************************************ 00:06:09.609 03:49:24 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:09.609 * Looking for test storage... 
00:06:09.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:09.609 03:49:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:09.609 03:49:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:09.609 03:49:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:09.609 03:49:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:09.609 03:49:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.609 03:49:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.609 03:49:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.609 ************************************ 00:06:09.609 START TEST default_locks 00:06:09.609 ************************************ 00:06:09.609 03:49:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:09.609 03:49:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=704989 00:06:09.609 03:49:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.609 03:49:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 704989 00:06:09.609 03:49:24 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 704989 ']' 00:06:09.609 03:49:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.609 03:49:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.609 03:49:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:09.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.609 03:49:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.609 03:49:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.609 [2024-07-25 03:49:24.841110] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:09.609 [2024-07-25 03:49:24.841203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid704989 ] 00:06:09.609 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.609 [2024-07-25 03:49:24.873177] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:09.609 [2024-07-25 03:49:24.899211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.867 [2024-07-25 03:49:24.984099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.125 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.125 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:10.125 03:49:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 704989 00:06:10.125 03:49:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 704989 00:06:10.125 03:49:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.383 lslocks: write error 00:06:10.383 03:49:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 704989 00:06:10.383 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 704989 ']' 00:06:10.383 03:49:25 event.cpu_locks.default_locks -- 
common/autotest_common.sh@954 -- # kill -0 704989 00:06:10.383 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:10.383 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.383 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 704989 00:06:10.383 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.383 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.383 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 704989' 00:06:10.383 killing process with pid 704989 00:06:10.383 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 704989 00:06:10.383 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 704989 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 704989 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 704989 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 704989 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 704989 
']' 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (704989) - No such process 00:06:10.949 ERROR: process (pid: 704989) is no longer running 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.949 00:06:10.949 real 0m1.205s 00:06:10.949 user 0m1.143s 00:06:10.949 sys 
0m0.522s 00:06:10.949 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.950 03:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.950 ************************************ 00:06:10.950 END TEST default_locks 00:06:10.950 ************************************ 00:06:10.950 03:49:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:10.950 03:49:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.950 03:49:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.950 03:49:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.950 ************************************ 00:06:10.950 START TEST default_locks_via_rpc 00:06:10.950 ************************************ 00:06:10.950 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:10.950 03:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=705153 00:06:10.950 03:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.950 03:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 705153 00:06:10.950 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 705153 ']' 00:06:10.950 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.950 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.950 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:10.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.950 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.950 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.950 [2024-07-25 03:49:26.094935] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:10.950 [2024-07-25 03:49:26.095039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid705153 ] 00:06:10.950 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.950 [2024-07-25 03:49:26.127558] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:10.950 [2024-07-25 03:49:26.157977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.208 [2024-07-25 03:49:26.254011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.208 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.208 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:11.208 03:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:11.208 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.208 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.466 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.466 03:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:11.466 03:49:26 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.466 03:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.466 03:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.466 03:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.466 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.466 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.466 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.466 03:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 705153 00:06:11.466 03:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 705153 00:06:11.466 03:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.724 03:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 705153 00:06:11.724 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 705153 ']' 00:06:11.724 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 705153 00:06:11.724 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:11.724 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.724 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 705153 00:06:11.724 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.724 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:06:11.724 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 705153' 00:06:11.724 killing process with pid 705153 00:06:11.724 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 705153 00:06:11.724 03:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 705153 00:06:11.982 00:06:11.982 real 0m1.182s 00:06:11.982 user 0m1.143s 00:06:11.982 sys 0m0.526s 00:06:11.982 03:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.982 03:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.982 ************************************ 00:06:11.982 END TEST default_locks_via_rpc 00:06:11.982 ************************************ 00:06:11.982 03:49:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:11.982 03:49:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.982 03:49:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.982 03:49:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.982 ************************************ 00:06:11.982 START TEST non_locking_app_on_locked_coremask 00:06:11.982 ************************************ 00:06:11.982 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:11.982 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=705313 00:06:11.982 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.982 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 
-- # waitforlisten 705313 /var/tmp/spdk.sock 00:06:11.982 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 705313 ']' 00:06:11.982 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.982 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.982 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.982 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.982 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.240 [2024-07-25 03:49:27.323621] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:12.240 [2024-07-25 03:49:27.323725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid705313 ] 00:06:12.240 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.240 [2024-07-25 03:49:27.356459] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:12.240 [2024-07-25 03:49:27.382567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.240 [2024-07-25 03:49:27.469971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.497 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.497 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:12.497 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=705446 00:06:12.497 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:12.497 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 705446 /var/tmp/spdk2.sock 00:06:12.497 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 705446 ']' 00:06:12.497 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.497 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.497 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:12.497 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.497 03:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.497 [2024-07-25 03:49:27.763724] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:12.498 [2024-07-25 03:49:27.763816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid705446 ] 00:06:12.498 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.755 [2024-07-25 03:49:27.798996] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:12.755 [2024-07-25 03:49:27.856853] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:12.755 [2024-07-25 03:49:27.856882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.755 [2024-07-25 03:49:28.041132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.686 03:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.686 03:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:13.686 03:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 705313 00:06:13.686 03:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 705313 00:06:13.686 03:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.944 lslocks: write error 00:06:13.944 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 705313 00:06:13.944 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 705313 ']' 00:06:13.944 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 705313 00:06:13.944 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:13.944 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.944 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 705313 00:06:13.944 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.944 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.944 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 705313' 00:06:13.944 killing process with pid 705313 00:06:13.944 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 705313 00:06:13.944 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 705313 00:06:14.875 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 705446 00:06:14.875 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 705446 ']' 00:06:14.875 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 705446 00:06:14.875 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:14.875 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.875 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 705446 00:06:14.875 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.875 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.875 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 705446' 00:06:14.875 killing process with pid 705446 00:06:14.875 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 705446 00:06:14.875 03:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 705446 00:06:15.133 00:06:15.133 real 0m3.051s 00:06:15.133 user 0m3.173s 00:06:15.133 sys 0m1.036s 00:06:15.133 03:49:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.133 03:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 ************************************ 00:06:15.133 END TEST non_locking_app_on_locked_coremask 00:06:15.133 ************************************ 00:06:15.133 03:49:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:15.133 03:49:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.133 03:49:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.133 03:49:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 ************************************ 00:06:15.133 START TEST locking_app_on_unlocked_coremask 00:06:15.133 ************************************ 00:06:15.133 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:15.133 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=705747 00:06:15.133 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:15.133 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 705747 /var/tmp/spdk.sock 00:06:15.133 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 705747 ']' 00:06:15.133 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.133 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.133 03:49:30 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.133 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.133 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 [2024-07-25 03:49:30.417396] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:15.133 [2024-07-25 03:49:30.417497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid705747 ] 00:06:15.390 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.390 [2024-07-25 03:49:30.450769] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.390 [2024-07-25 03:49:30.476662] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:15.390 [2024-07-25 03:49:30.476700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.390 [2024-07-25 03:49:30.564710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.648 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.648 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:15.648 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=705756 00:06:15.648 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.648 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 705756 /var/tmp/spdk2.sock 00:06:15.648 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 705756 ']' 00:06:15.648 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.648 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.648 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.648 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.648 03:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.648 [2024-07-25 03:49:30.864911] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:06:15.648 [2024-07-25 03:49:30.865002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid705756 ] 00:06:15.648 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.648 [2024-07-25 03:49:30.898954] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.906 [2024-07-25 03:49:30.955716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.906 [2024-07-25 03:49:31.146014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.839 03:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.839 03:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:16.839 03:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 705756 00:06:16.839 03:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 705756 00:06:16.839 03:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.097 lslocks: write error 00:06:17.097 03:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 705747 00:06:17.097 03:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 705747 ']' 00:06:17.097 03:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 705747 00:06:17.097 03:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:17.097 03:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:17.097 03:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 705747 00:06:17.097 03:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.097 03:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.097 03:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 705747' 00:06:17.097 killing process with pid 705747 00:06:17.097 03:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 705747 00:06:17.097 03:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 705747 00:06:18.030 03:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 705756 00:06:18.030 03:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 705756 ']' 00:06:18.030 03:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 705756 00:06:18.030 03:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:18.030 03:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.030 03:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 705756 00:06:18.030 03:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.030 03:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.030 03:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 705756' 00:06:18.030 killing process with pid 705756 00:06:18.030 03:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 705756 00:06:18.030 03:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 705756 00:06:18.287 00:06:18.287 real 0m3.144s 00:06:18.287 user 0m3.285s 00:06:18.287 sys 0m1.045s 00:06:18.287 03:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.287 03:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.287 ************************************ 00:06:18.287 END TEST locking_app_on_unlocked_coremask 00:06:18.287 ************************************ 00:06:18.288 03:49:33 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:18.288 03:49:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.288 03:49:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.288 03:49:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.288 ************************************ 00:06:18.288 START TEST locking_app_on_locked_coremask 00:06:18.288 ************************************ 00:06:18.288 03:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:18.288 03:49:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=706181 00:06:18.288 03:49:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.288 03:49:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 706181 /var/tmp/spdk.sock 00:06:18.288 03:49:33 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 706181 ']' 00:06:18.288 03:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.288 03:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.288 03:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.288 03:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.288 03:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.545 [2024-07-25 03:49:33.617693] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:18.545 [2024-07-25 03:49:33.617792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706181 ] 00:06:18.545 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.545 [2024-07-25 03:49:33.654698] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:18.545 [2024-07-25 03:49:33.684795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.545 [2024-07-25 03:49:33.779996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=706190 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 706190 /var/tmp/spdk2.sock 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 706190 /var/tmp/spdk2.sock 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 706190 /var/tmp/spdk2.sock 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 706190 ']' 
00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.803 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.803 [2024-07-25 03:49:34.084960] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:18.803 [2024-07-25 03:49:34.085051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706190 ] 00:06:19.061 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.061 [2024-07-25 03:49:34.120959] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:19.061 [2024-07-25 03:49:34.177186] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 706181 has claimed it. 00:06:19.061 [2024-07-25 03:49:34.177233] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:19.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (706190) - No such process 00:06:19.626 ERROR: process (pid: 706190) is no longer running 00:06:19.626 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.626 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:19.626 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:19.626 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.626 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.626 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.626 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 706181 00:06:19.626 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 706181 00:06:19.626 03:49:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.191 lslocks: write error 00:06:20.191 03:49:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 706181 00:06:20.191 03:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 706181 ']' 00:06:20.191 03:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 706181 00:06:20.191 03:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:20.191 03:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.191 03:49:35 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 706181 00:06:20.191 03:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.191 03:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.191 03:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 706181' 00:06:20.191 killing process with pid 706181 00:06:20.191 03:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 706181 00:06:20.191 03:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 706181 00:06:20.448 00:06:20.448 real 0m2.068s 00:06:20.448 user 0m2.246s 00:06:20.448 sys 0m0.668s 00:06:20.448 03:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.448 03:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.448 ************************************ 00:06:20.448 END TEST locking_app_on_locked_coremask 00:06:20.448 ************************************ 00:06:20.448 03:49:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:20.448 03:49:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.448 03:49:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.448 03:49:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.448 ************************************ 00:06:20.448 START TEST locking_overlapped_coremask 00:06:20.448 ************************************ 00:06:20.448 03:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:20.448 03:49:35 event.cpu_locks.locking_overlapped_coremask -- 
event/cpu_locks.sh@132 -- # spdk_tgt_pid=706479 00:06:20.448 03:49:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:20.448 03:49:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 706479 /var/tmp/spdk.sock 00:06:20.448 03:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 706479 ']' 00:06:20.448 03:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.448 03:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.449 03:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.449 03:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.449 03:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.449 [2024-07-25 03:49:35.729058] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:20.449 [2024-07-25 03:49:35.729163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706479 ] 00:06:20.706 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.707 [2024-07-25 03:49:35.763091] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:20.707 [2024-07-25 03:49:35.789292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.707 [2024-07-25 03:49:35.878660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.707 [2024-07-25 03:49:35.878723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.707 [2024-07-25 03:49:35.878725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=706490 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 706490 /var/tmp/spdk2.sock 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 706490 /var/tmp/spdk2.sock 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@653 -- # waitforlisten 706490 /var/tmp/spdk2.sock 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 706490 ']' 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.964 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.964 [2024-07-25 03:49:36.188461] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:20.964 [2024-07-25 03:49:36.188554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706490 ] 00:06:20.964 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.964 [2024-07-25 03:49:36.223280] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:21.221 [2024-07-25 03:49:36.278270] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 706479 has claimed it. 00:06:21.221 [2024-07-25 03:49:36.278325] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:21.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (706490) - No such process 00:06:21.817 ERROR: process (pid: 706490) is no longer running 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 706479 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 706479 ']' 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 706479 
00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 706479 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 706479' 00:06:21.817 killing process with pid 706479 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 706479 00:06:21.817 03:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 706479 00:06:22.075 00:06:22.075 real 0m1.640s 00:06:22.075 user 0m4.438s 00:06:22.075 sys 0m0.442s 00:06:22.075 03:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.075 03:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.075 ************************************ 00:06:22.075 END TEST locking_overlapped_coremask 00:06:22.075 ************************************ 00:06:22.075 03:49:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:22.075 03:49:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.075 03:49:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.075 03:49:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.075 ************************************ 
00:06:22.075 START TEST locking_overlapped_coremask_via_rpc 00:06:22.075 ************************************ 00:06:22.075 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:22.075 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=706656 00:06:22.075 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:22.075 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 706656 /var/tmp/spdk.sock 00:06:22.075 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 706656 ']' 00:06:22.075 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.075 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.075 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.075 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.075 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.332 [2024-07-25 03:49:37.419812] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:06:22.332 [2024-07-25 03:49:37.419903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706656 ] 00:06:22.332 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.332 [2024-07-25 03:49:37.450531] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:22.332 [2024-07-25 03:49:37.481777] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:22.332 [2024-07-25 03:49:37.481807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.332 [2024-07-25 03:49:37.572286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.333 [2024-07-25 03:49:37.572341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.333 [2024-07-25 03:49:37.572359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.589 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.589 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:22.590 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=706786 00:06:22.590 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 706786 /var/tmp/spdk2.sock 00:06:22.590 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:22.590 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 706786 ']' 00:06:22.590 03:49:37 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.590 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.590 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.590 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.590 03:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.590 [2024-07-25 03:49:37.869685] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:22.590 [2024-07-25 03:49:37.869774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706786 ] 00:06:22.847 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.847 [2024-07-25 03:49:37.905330] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:22.847 [2024-07-25 03:49:37.960015] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:22.847 [2024-07-25 03:49:37.960039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.847 [2024-07-25 03:49:38.135552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.847 [2024-07-25 03:49:38.135607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:22.847 [2024-07-25 03:49:38.135609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.780 03:49:38 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 [2024-07-25 03:49:38.824349] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 706656 has claimed it. 00:06:23.780 request: 00:06:23.780 { 00:06:23.780 "method": "framework_enable_cpumask_locks", 00:06:23.780 "req_id": 1 00:06:23.780 } 00:06:23.780 Got JSON-RPC error response 00:06:23.780 response: 00:06:23.780 { 00:06:23.780 "code": -32603, 00:06:23.780 "message": "Failed to claim CPU core: 2" 00:06:23.780 } 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 706656 /var/tmp/spdk.sock 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- 
# '[' -z 706656 ']' 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.780 03:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.037 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.038 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.038 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 706786 /var/tmp/spdk2.sock 00:06:24.038 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 706786 ']' 00:06:24.038 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.038 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.038 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:24.038 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.038 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.295 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.295 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.295 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:24.295 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.295 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.295 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.295 00:06:24.295 real 0m1.972s 00:06:24.295 user 0m1.014s 00:06:24.295 sys 0m0.185s 00:06:24.295 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.295 03:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.295 ************************************ 00:06:24.295 END TEST locking_overlapped_coremask_via_rpc 00:06:24.295 ************************************ 00:06:24.295 03:49:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:24.296 03:49:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 706656 ]] 00:06:24.296 03:49:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 706656 00:06:24.296 03:49:39 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 706656 ']' 00:06:24.296 03:49:39 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 706656 00:06:24.296 03:49:39 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:24.296 03:49:39 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.296 03:49:39 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 706656 00:06:24.296 03:49:39 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.296 03:49:39 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.296 03:49:39 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 706656' 00:06:24.296 killing process with pid 706656 00:06:24.296 03:49:39 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 706656 00:06:24.296 03:49:39 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 706656 00:06:24.553 03:49:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 706786 ]] 00:06:24.553 03:49:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 706786 00:06:24.553 03:49:39 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 706786 ']' 00:06:24.553 03:49:39 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 706786 00:06:24.553 03:49:39 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:24.553 03:49:39 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.553 03:49:39 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 706786 00:06:24.553 03:49:39 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:24.553 03:49:39 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:24.553 03:49:39 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 706786' 00:06:24.553 
killing process with pid 706786 00:06:24.553 03:49:39 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 706786 00:06:24.553 03:49:39 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 706786 00:06:25.119 03:49:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.119 03:49:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:25.119 03:49:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 706656 ]] 00:06:25.119 03:49:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 706656 00:06:25.119 03:49:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 706656 ']' 00:06:25.119 03:49:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 706656 00:06:25.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (706656) - No such process 00:06:25.119 03:49:40 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 706656 is not found' 00:06:25.119 Process with pid 706656 is not found 00:06:25.119 03:49:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 706786 ]] 00:06:25.119 03:49:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 706786 00:06:25.119 03:49:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 706786 ']' 00:06:25.119 03:49:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 706786 00:06:25.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (706786) - No such process 00:06:25.119 03:49:40 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 706786 is not found' 00:06:25.119 Process with pid 706786 is not found 00:06:25.119 03:49:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.119 00:06:25.119 real 0m15.508s 00:06:25.119 user 0m27.204s 00:06:25.119 sys 0m5.337s 00:06:25.119 03:49:40 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.119 03:49:40 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:25.119 ************************************ 00:06:25.119 END TEST cpu_locks 00:06:25.119 ************************************ 00:06:25.119 00:06:25.119 real 0m39.277s 00:06:25.119 user 1m15.099s 00:06:25.119 sys 0m9.413s 00:06:25.119 03:49:40 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.119 03:49:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.119 ************************************ 00:06:25.119 END TEST event 00:06:25.119 ************************************ 00:06:25.119 03:49:40 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:25.119 03:49:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.119 03:49:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.119 03:49:40 -- common/autotest_common.sh@10 -- # set +x 00:06:25.119 ************************************ 00:06:25.119 START TEST thread 00:06:25.119 ************************************ 00:06:25.119 03:49:40 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:25.119 * Looking for test storage... 
00:06:25.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:25.119 03:49:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:25.119 03:49:40 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:25.119 03:49:40 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.119 03:49:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.119 ************************************ 00:06:25.119 START TEST thread_poller_perf 00:06:25.119 ************************************ 00:06:25.119 03:49:40 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:25.119 [2024-07-25 03:49:40.366215] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:25.119 [2024-07-25 03:49:40.366390] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid707156 ] 00:06:25.119 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.119 [2024-07-25 03:49:40.398480] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:25.377 [2024-07-25 03:49:40.428384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.377 [2024-07-25 03:49:40.517661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.377 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:26.311 ====================================== 00:06:26.311 busy:2707795141 (cyc) 00:06:26.311 total_run_count: 292000 00:06:26.311 tsc_hz: 2700000000 (cyc) 00:06:26.311 ====================================== 00:06:26.311 poller_cost: 9273 (cyc), 3434 (nsec) 00:06:26.311 00:06:26.311 real 0m1.255s 00:06:26.311 user 0m1.172s 00:06:26.311 sys 0m0.077s 00:06:26.311 03:49:41 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.311 03:49:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.311 ************************************ 00:06:26.311 END TEST thread_poller_perf 00:06:26.311 ************************************ 00:06:26.569 03:49:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.569 03:49:41 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:26.569 03:49:41 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.569 03:49:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.569 ************************************ 00:06:26.569 START TEST thread_poller_perf 00:06:26.570 ************************************ 00:06:26.570 03:49:41 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.570 [2024-07-25 03:49:41.673914] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:06:26.570 [2024-07-25 03:49:41.673983] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid707308 ] 00:06:26.570 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.570 [2024-07-25 03:49:41.705731] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:26.570 [2024-07-25 03:49:41.737835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.570 [2024-07-25 03:49:41.828704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.570 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:27.943 ====================================== 00:06:27.943 busy:2703001693 (cyc) 00:06:27.943 total_run_count: 3954000 00:06:27.943 tsc_hz: 2700000000 (cyc) 00:06:27.943 ====================================== 00:06:27.943 poller_cost: 683 (cyc), 252 (nsec) 00:06:27.943 00:06:27.943 real 0m1.253s 00:06:27.943 user 0m1.164s 00:06:27.943 sys 0m0.084s 00:06:27.943 03:49:42 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.943 03:49:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.943 ************************************ 00:06:27.943 END TEST thread_poller_perf 00:06:27.943 ************************************ 00:06:27.943 03:49:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:27.943 00:06:27.943 real 0m2.655s 00:06:27.943 user 0m2.393s 00:06:27.943 sys 0m0.261s 00:06:27.943 03:49:42 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.943 03:49:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.943 ************************************ 00:06:27.943 END TEST thread 00:06:27.943 ************************************ 00:06:27.943 03:49:42 
-- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:27.943 03:49:42 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:27.943 03:49:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.943 03:49:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.943 03:49:42 -- common/autotest_common.sh@10 -- # set +x 00:06:27.943 ************************************ 00:06:27.943 START TEST app_cmdline 00:06:27.943 ************************************ 00:06:27.943 03:49:42 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:27.943 * Looking for test storage... 00:06:27.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:27.943 03:49:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:27.943 03:49:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=707505 00:06:27.943 03:49:43 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:27.943 03:49:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 707505 00:06:27.943 03:49:43 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 707505 ']' 00:06:27.943 03:49:43 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.943 03:49:43 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.943 03:49:43 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:27.943 03:49:43 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.943 03:49:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.943 [2024-07-25 03:49:43.088299] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:27.943 [2024-07-25 03:49:43.088378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid707505 ] 00:06:27.943 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.943 [2024-07-25 03:49:43.118937] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:27.944 [2024-07-25 03:49:43.145933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.944 [2024-07-25 03:49:43.229098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.202 03:49:43 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.202 03:49:43 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:28.202 03:49:43 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:28.460 { 00:06:28.460 "version": "SPDK v24.09-pre git sha1 d005e023b", 00:06:28.460 "fields": { 00:06:28.460 "major": 24, 00:06:28.460 "minor": 9, 00:06:28.460 "patch": 0, 00:06:28.460 "suffix": "-pre", 00:06:28.460 "commit": "d005e023b" 00:06:28.460 } 00:06:28.460 } 00:06:28.460 03:49:43 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:28.460 03:49:43 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:28.460 03:49:43 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:28.460 03:49:43 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd 
rpc_get_methods | jq -r ".[]" | sort)) 00:06:28.460 03:49:43 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:28.460 03:49:43 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.460 03:49:43 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:28.460 03:49:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.460 03:49:43 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:28.460 03:49:43 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.718 03:49:43 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:28.718 03:49:43 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:28.718 03:49:43 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.718 03:49:43 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:28.718 03:49:43 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.718 03:49:43 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.718 03:49:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.718 03:49:43 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.718 03:49:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.718 03:49:43 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.718 03:49:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.718 03:49:43 app_cmdline -- common/autotest_common.sh@644 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.718 03:49:43 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:28.718 03:49:43 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.718 request: 00:06:28.718 { 00:06:28.718 "method": "env_dpdk_get_mem_stats", 00:06:28.718 "req_id": 1 00:06:28.718 } 00:06:28.718 Got JSON-RPC error response 00:06:28.718 response: 00:06:28.718 { 00:06:28.718 "code": -32601, 00:06:28.718 "message": "Method not found" 00:06:28.718 } 00:06:28.718 03:49:44 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:28.718 03:49:44 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.718 03:49:44 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:28.718 03:49:44 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.718 03:49:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 707505 00:06:28.718 03:49:44 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 707505 ']' 00:06:28.718 03:49:44 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 707505 00:06:28.718 03:49:44 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:28.718 03:49:44 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.976 03:49:44 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 707505 00:06:28.976 03:49:44 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.976 03:49:44 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.976 03:49:44 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 707505' 00:06:28.976 killing process with pid 707505 00:06:28.976 03:49:44 app_cmdline -- common/autotest_common.sh@969 -- # kill 707505 00:06:28.976 03:49:44 app_cmdline 
-- common/autotest_common.sh@974 -- # wait 707505 00:06:29.234 00:06:29.234 real 0m1.477s 00:06:29.234 user 0m1.798s 00:06:29.234 sys 0m0.454s 00:06:29.234 03:49:44 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.234 03:49:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.234 ************************************ 00:06:29.234 END TEST app_cmdline 00:06:29.234 ************************************ 00:06:29.234 03:49:44 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:29.234 03:49:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.234 03:49:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.234 03:49:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.234 ************************************ 00:06:29.234 START TEST version 00:06:29.235 ************************************ 00:06:29.235 03:49:44 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:29.493 * Looking for test storage... 
00:06:29.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:29.493 03:49:44 version -- app/version.sh@17 -- # get_header_version major 00:06:29.493 03:49:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:29.493 03:49:44 version -- app/version.sh@14 -- # cut -f2 00:06:29.493 03:49:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.493 03:49:44 version -- app/version.sh@17 -- # major=24 00:06:29.493 03:49:44 version -- app/version.sh@18 -- # get_header_version minor 00:06:29.493 03:49:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:29.493 03:49:44 version -- app/version.sh@14 -- # cut -f2 00:06:29.493 03:49:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.493 03:49:44 version -- app/version.sh@18 -- # minor=9 00:06:29.493 03:49:44 version -- app/version.sh@19 -- # get_header_version patch 00:06:29.493 03:49:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:29.493 03:49:44 version -- app/version.sh@14 -- # cut -f2 00:06:29.493 03:49:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.493 03:49:44 version -- app/version.sh@19 -- # patch=0 00:06:29.493 03:49:44 version -- app/version.sh@20 -- # get_header_version suffix 00:06:29.493 03:49:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:29.493 03:49:44 version -- app/version.sh@14 -- # cut -f2 00:06:29.493 03:49:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.493 03:49:44 version -- app/version.sh@20 -- # suffix=-pre 00:06:29.493 03:49:44 version -- app/version.sh@22 -- # version=24.9 
00:06:29.493 03:49:44 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:29.493 03:49:44 version -- app/version.sh@28 -- # version=24.9rc0 00:06:29.493 03:49:44 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:29.493 03:49:44 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:29.493 03:49:44 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:29.493 03:49:44 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:29.493 00:06:29.493 real 0m0.112s 00:06:29.493 user 0m0.053s 00:06:29.493 sys 0m0.080s 00:06:29.493 03:49:44 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.493 03:49:44 version -- common/autotest_common.sh@10 -- # set +x 00:06:29.493 ************************************ 00:06:29.493 END TEST version 00:06:29.493 ************************************ 00:06:29.493 03:49:44 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:29.493 03:49:44 -- spdk/autotest.sh@202 -- # uname -s 00:06:29.493 03:49:44 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:29.493 03:49:44 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:29.493 03:49:44 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:29.493 03:49:44 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:06:29.493 03:49:44 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:29.493 03:49:44 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:29.493 03:49:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.493 03:49:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.493 03:49:44 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:29.493 03:49:44 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:29.493 03:49:44 -- 
spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:29.493 03:49:44 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:29.493 03:49:44 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:29.493 03:49:44 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:29.493 03:49:44 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:29.493 03:49:44 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:29.493 03:49:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.493 03:49:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.493 ************************************ 00:06:29.493 START TEST nvmf_tcp 00:06:29.493 ************************************ 00:06:29.493 03:49:44 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:29.493 * Looking for test storage... 00:06:29.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:29.493 03:49:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:29.493 03:49:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:29.493 03:49:44 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:29.493 03:49:44 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:29.493 03:49:44 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.493 03:49:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.493 ************************************ 00:06:29.493 START TEST nvmf_target_core 00:06:29.493 ************************************ 00:06:29.493 03:49:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:29.752 * Looking for test storage... 
00:06:29.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.752 03:49:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:29.753 03:49:44 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:29.753 ************************************ 00:06:29.753 START TEST nvmf_abort 00:06:29.753 ************************************ 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:29.753 * Looking for test storage... 
00:06:29.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:29.753 03:49:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:29.753 03:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:31.653 03:49:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:31.653 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:31.653 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:31.653 03:49:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:31.653 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.653 03:49:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:31.653 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:31.653 03:49:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:31.653 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:31.911 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:31.912 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:31.912 03:49:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:31.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:31.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:06:31.912 00:06:31.912 --- 10.0.0.2 ping statistics --- 00:06:31.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.912 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:06:31.912 03:49:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:31.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:31.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:06:31.912 00:06:31.912 --- 10.0.0.1 ping statistics --- 00:06:31.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.912 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:31.912 03:49:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=709554 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 709554 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 709554 ']' 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.912 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.912 [2024-07-25 03:49:47.075341] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:06:31.912 [2024-07-25 03:49:47.075413] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.912 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.912 [2024-07-25 03:49:47.112945] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:31.912 [2024-07-25 03:49:47.145269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.170 [2024-07-25 03:49:47.237759] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.170 [2024-07-25 03:49:47.237817] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.170 [2024-07-25 03:49:47.237833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.170 [2024-07-25 03:49:47.237847] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.170 [2024-07-25 03:49:47.237859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:32.170 [2024-07-25 03:49:47.237951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.170 [2024-07-25 03:49:47.238079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.170 [2024-07-25 03:49:47.238082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.170 [2024-07-25 03:49:47.383994] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.170 Malloc0 00:06:32.170 03:49:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.170 Delay0 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.170 [2024-07-25 03:49:47.451036] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.170 03:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:32.428 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.428 [2024-07-25 03:49:47.597444] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:34.955 Initializing NVMe Controllers 00:06:34.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:34.955 controller IO queue size 128 less than required 00:06:34.955 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:34.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:34.955 Initialization complete. Launching workers. 
00:06:34.955 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32975 00:06:34.955 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33036, failed to submit 62 00:06:34.955 success 32979, unsuccess 57, failed 0 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:34.955 rmmod nvme_tcp 00:06:34.955 rmmod nvme_fabrics 00:06:34.955 rmmod nvme_keyring 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:34.955 03:49:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 709554 ']' 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 709554 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 709554 ']' 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 709554 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 709554 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 709554' 00:06:34.955 killing process with pid 709554 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 709554 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 709554 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:34.955 03:49:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.955 03:49:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:36.856 00:06:36.856 real 0m7.207s 00:06:36.856 user 0m10.474s 00:06:36.856 sys 0m2.476s 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.856 ************************************ 00:06:36.856 END TEST nvmf_abort 00:06:36.856 ************************************ 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:36.856 ************************************ 00:06:36.856 START TEST nvmf_ns_hotplug_stress 00:06:36.856 ************************************ 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:36.856 * Looking for test storage... 
00:06:36.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.856 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:37.114 03:49:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:37.114 03:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:39.028 03:49:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:39.028 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:39.028 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.028 03:49:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:39.028 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:39.028 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:39.028 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 
addr flush cvl_0_0 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:39.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:39.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:06:39.029 00:06:39.029 --- 10.0.0.2 ping statistics --- 00:06:39.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.029 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:39.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:39.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:06:39.029 00:06:39.029 --- 10.0.0.1 ping statistics --- 00:06:39.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.029 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=711779 00:06:39.029 03:49:54 
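For reference, the namespace plumbing that `nvmf_tcp_init` performs above (nvmf/common.sh@229-268) follows the shape below. This is a hedged dry-run sketch, not the test harness itself: the privileged `ip`/`iptables` calls are stubbed through a `run` wrapper so the flow can be read without root, and the `cvl_0_*` interface names are simply the ones from this run.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-namespace setup traced above.
# run() only echoes; swap its body for "$@" to execute for real (needs root).
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator -> target reachability check
```

Putting the target interface in its own namespace is what lets `nvmf_tgt` (launched with `ip netns exec cvl_0_0_ns_spdk` below) and the initiator-side tools share one physical host while still traversing a real TCP path.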
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 711779 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 711779 ']' 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.029 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.029 [2024-07-25 03:49:54.303473] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:39.029 [2024-07-25 03:49:54.303574] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.300 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.300 [2024-07-25 03:49:54.346119] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:39.300 [2024-07-25 03:49:54.373339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.300 [2024-07-25 03:49:54.461165] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.300 [2024-07-25 03:49:54.461226] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.300 [2024-07-25 03:49:54.461239] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.300 [2024-07-25 03:49:54.461259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.300 [2024-07-25 03:49:54.461269] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:39.300 [2024-07-25 03:49:54.461404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.300 [2024-07-25 03:49:54.461330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.300 [2024-07-25 03:49:54.461400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.300 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.300 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:39.300 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:39.300 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:39.300 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.300 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.300 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # 
null_size=1000 00:06:39.300 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:39.557 [2024-07-25 03:49:54.840637] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.814 03:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:40.071 03:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:40.327 [2024-07-25 03:49:55.389224] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:40.327 03:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:40.585 03:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:40.842 Malloc0 00:06:40.842 03:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:41.100 Delay0 00:06:41.100 03:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.100 03:49:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:41.358 NULL1 00:06:41.358 03:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:41.616 03:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=712076 00:06:41.616 03:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:41.616 03:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:41.616 03:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.873 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.806 Read completed with error (sct=0, sc=11) 00:06:42.806 03:49:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.806 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.070 Message 
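The RPC bring-up sequence just traced (ns_hotplug_stress.sh@27-36) can be replayed as a plain script. In this sketch `rpc()` echoes instead of invoking `spdk/scripts/rpc.py`, so the call order is visible without a live target; the NQN, serial, and bdev parameters are taken directly from this job's log.

```shell
# Dry-run sketch of the subsystem bring-up; rpc() echoes instead of
# invoking spdk/scripts/rpc.py so the sequence runs anywhere.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0      # backing RAM disk
rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # 1s artificial latency
rpc nvmf_subsystem_add_ns "$NQN" Delay0
rpc bdev_null_create NULL1 1000 512           # 1000 blocks, resized later
rpc nvmf_subsystem_add_ns "$NQN" NULL1
```

The Delay0 bdev's large latencies are what keep I/O outstanding long enough for the namespace hot-remove below to race against it.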
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.070 03:49:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:43.070 03:49:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:43.327 true 00:06:43.327 03:49:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:43.327 03:49:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.259 03:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.516 03:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:44.516 03:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:44.774 true 00:06:44.774 03:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:44.774 03:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.031 03:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.288 03:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:45.288 03:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:45.288 true 00:06:45.546 03:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:45.546 03:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.803 03:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.060 03:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:46.060 03:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:46.060 true 00:06:46.060 03:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:46.060 03:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.430 03:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.430 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:06:47.430 03:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:47.430 03:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:47.688 true 00:06:47.688 03:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:47.688 03:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.945 03:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.203 03:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:48.203 03:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:48.459 true 00:06:48.459 03:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:48.459 03:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.391 03:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.649 
03:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:49.649 03:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:49.906 true 00:06:49.906 03:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:49.906 03:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.164 03:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.421 03:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:50.421 03:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:50.679 true 00:06:50.679 03:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:50.679 03:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.611 03:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.869 03:50:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:51.869 03:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:52.125 true 00:06:52.125 03:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:52.125 03:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.383 03:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.640 03:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:52.640 03:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:52.897 true 00:06:52.897 03:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:52.897 03:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.841 03:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.841 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:06:54.123 03:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:54.123 03:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:54.123 true 00:06:54.123 03:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:54.123 03:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.381 03:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.639 03:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:54.639 03:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:54.895 true 00:06:54.895 03:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:54.895 03:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.827 03:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.085 03:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1013 00:06:56.085 03:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:56.342 true 00:06:56.342 03:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:56.342 03:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.600 03:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.857 03:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:56.857 03:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:57.114 true 00:06:57.114 03:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:57.114 03:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.045 03:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.045 03:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:58.045 03:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:58.301 true 00:06:58.301 03:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:58.301 03:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.557 03:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.814 03:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:58.814 03:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:59.070 true 00:06:59.070 03:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:06:59.070 03:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.000 03:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.257 03:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 
00:07:00.257 03:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:00.514 true 00:07:00.514 03:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:07:00.514 03:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.771 03:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.027 03:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:01.028 03:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:01.284 true 00:07:01.284 03:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:07:01.284 03:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.213 03:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.470 03:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:02.470 03:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:02.728 true 00:07:02.728 03:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:07:02.728 03:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.985 03:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.241 03:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:03.241 03:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:03.497 true 00:07:03.497 03:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:07:03.497 03:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.431 03:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.431 03:50:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:04.431 03:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:04.687 true 00:07:04.687 03:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:07:04.687 03:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.944 03:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.201 03:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:05.201 03:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:05.458 true 00:07:05.459 03:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:07:05.459 03:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.391 03:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.648 03:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:06.648 03:50:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:06.906 true 00:07:06.906 03:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:07:06.906 03:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.164 03:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.421 03:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:07.421 03:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:07.678 true 00:07:07.678 03:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:07:07.678 03:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.936 03:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.193 03:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:08.193 03:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:08.450 true 00:07:08.450 03:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:07:08.450 03:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.858 03:50:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.858 03:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:09.858 03:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:10.115 true 00:07:10.115 03:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:07:10.115 03:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.047 
03:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.047 03:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:11.047 03:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:11.305 true 00:07:11.305 03:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:07:11.305 03:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.563 03:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.820 03:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:11.820 03:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:12.077 true 00:07:12.077 03:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:07:12.077 03:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.008 Initializing NVMe Controllers 00:07:13.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:07:13.008 Controller IO queue size 128, less than required. 00:07:13.008 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:13.008 Controller IO queue size 128, less than required. 00:07:13.008 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:13.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:13.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:13.008 Initialization complete. Launching workers.
00:07:13.008 ========================================================
00:07:13.008 Latency(us)
00:07:13.008 Device Information : IOPS MiB/s Average min max
00:07:13.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 971.20 0.47 73850.33 2363.96 1012629.98
00:07:13.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11409.63 5.57 11218.62 3570.98 449314.12
00:07:13.008 ========================================================
00:07:13.008 Total : 12380.83 6.05 16131.69 2363.96 1012629.98
00:07:13.008 03:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.265 03:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:13.265 03:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:13.522 true 00:07:13.522 03:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 712076 00:07:13.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill:
(712076) - No such process 00:07:13.522 03:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 712076 00:07:13.522 03:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.779 03:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.035 03:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:14.035 03:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:14.035 03:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:14.035 03:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:14.035 03:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:14.292 null0 00:07:14.292 03:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:14.292 03:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:14.292 03:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:14.292 null1 00:07:14.549 03:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:14.549 03:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:14.549 03:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:14.549 null2 00:07:14.549 03:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:14.806 03:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:14.806 03:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:14.806 null3 00:07:14.806 03:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:14.806 03:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:14.806 03:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:15.063 null4 00:07:15.063 03:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:15.063 03:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:15.063 03:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:15.319 null5 00:07:15.319 03:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:15.319 03:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:15.319 03:50:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:15.576 null6 00:07:15.576 03:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:15.576 03:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:15.576 03:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:15.833 null7 00:07:15.833 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:15.833 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:15.833 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:15.833 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:15.833 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:15.833 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:15.833 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:15.833 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:15.833 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:15.833 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:15.833 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.833 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:15.833 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
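The sh@58–@64 trace above shows the second phase of the test: eight null bdevs (`null0`..`null7`) are created, then eight background `add_remove` workers are launched (one per namespace ID), with each worker PID collected via `pids+=($!)` so sh@66 can `wait` on all of them. A sketch of that fan-out, with the RPC calls stubbed to no-ops (assumption: reconstructed from the trace; the real `add_remove` in ns_hotplug_stress.sh issues live `nvmf_subsystem_add_ns`/`nvmf_subsystem_remove_ns` RPCs):

```shell
#!/usr/bin/env bash
# Stand-in for scripts/rpc.py (assumption: no-op stub)
rpc() { :; }

# Mirrors add_remove in ns_hotplug_stress.sh (sh@14-@19): hot-add then
# hot-remove namespace $1, backed by bdev $2, ten times.
add_remove() {
  local nsid=$1 bdev=$2 i
  for ((i = 0; i < 10; i++)); do
    rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
  done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
  add_remove $((i + 1)) "null$i" &   # one worker per namespace, in background
  pids+=($!)                         # sh@64: remember each worker's PID
done
wait "${pids[@]}"                    # sh@66: block until all workers finish
echo "spawned ${#pids[@]} workers"
```

Running the eight workers concurrently is the point of the stress test: namespace adds and removes race against each other on the same subsystem, which is why the interleaved `@16`/`@17`/`@18` trace lines below appear out of order.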
00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 716272 716273 716275 716277 716279 716281 716283 716285 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.834 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:16.091 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:16.091 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:16.091 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:16.091 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.091 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:16.091 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:16.091 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:16.091 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:16.349 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.349 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.349 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:16.349 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.349 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.349 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
00:07:16.349 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.349 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.349 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.607 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:16.864 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:16.864 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:16.864 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:16.864 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:16.864 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:16.864 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:16.864 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:16.865 03:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.122 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:17.380 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:17.380 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:17.380 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:17.380 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:17.380 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.380 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:17.380 03:50:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.380 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.638 03:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:17.895 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:17.895 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:17.895 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.895 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:17.895 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:17.895 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.895 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:17.895 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.153 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.410 03:50:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.410 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.410 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.410 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.410 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.410 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.410 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.410 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.668 03:50:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.668 03:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.926 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.926 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.926 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.926 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.926 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.926 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.926 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.926 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.185 03:50:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.185 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.443 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:19.443 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.443 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.443 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:19.443 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.443 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.443 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.443 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.700 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.700 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.700 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.700 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.700 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.700 
03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.700 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.701 03:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.958 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:19.958 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.958 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:19.958 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.958 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.958 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.958 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.958 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.215 03:50:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.215 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.473 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.473 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.473 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.473 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.473 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.473 03:50:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.473 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.473 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.731 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.731 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.731 03:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.731 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.989 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.989 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.989 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.989 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.246 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.246 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.246 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.246 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.246 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.246 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.246 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.246 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:21.504 rmmod nvme_tcp 00:07:21.504 rmmod nvme_fabrics 00:07:21.504 rmmod nvme_keyring 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 711779 ']' 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 711779 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' 
-z 711779 ']' 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 711779 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 711779 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 711779' 00:07:21.504 killing process with pid 711779 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 711779 00:07:21.504 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 711779 00:07:21.761 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:21.761 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:21.761 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:21.761 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:21.761 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:21.761 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.761 03:50:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.761 03:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.698 03:50:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:23.698 00:07:23.698 real 0m46.880s 00:07:23.698 user 3m33.393s 00:07:23.698 sys 0m16.377s 00:07:23.698 03:50:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.698 03:50:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:23.698 ************************************ 00:07:23.698 END TEST nvmf_ns_hotplug_stress 00:07:23.698 ************************************ 00:07:23.955 03:50:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:23.955 03:50:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:23.956 03:50:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.956 03:50:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:23.956 ************************************ 00:07:23.956 START TEST nvmf_delete_subsystem 00:07:23.956 ************************************ 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:23.956 * Looking for test storage... 
00:07:23.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:23.956 03:50:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:25.854 03:50:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.854 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.854 03:50:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:25.855 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:25.855 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.855 03:50:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:25.855 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:25.855 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 
)) 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.855 
03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.855 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:26.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:07:26.114 00:07:26.114 --- 10.0.0.2 ping statistics --- 00:07:26.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.114 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:26.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:07:26.114 00:07:26.114 --- 10.0.0.1 ping statistics --- 00:07:26.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.114 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=719039 00:07:26.114 03:50:41 
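The trace records above build the test topology by hand: one physical port (cvl_0_0) is moved into a network namespace to act as the NVMe/TCP target side, while its sibling port (cvl_0_1) stays in the root namespace as the initiator side, and a bidirectional ping confirms reachability before the target is started. The sequence can be sketched as the script below; interface names and addresses are taken from the log, and the RUN=echo dry-run wrapper is an addition of this sketch (not part of the test scripts) so the plan can be inspected without root privileges.

```shell
# Hedged reconstruction of the netns setup phase seen in the trace.
# RUN defaults to "echo" (dry run); set RUN="" and run as root to apply.
RUN="${RUN:-echo}"
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
TARGET_NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

# Clear any stale IPv4 addresses on both ports.
$RUN ip -4 addr flush "$TARGET_IF"
$RUN ip -4 addr flush "$INITIATOR_IF"

# Create the target namespace and move the target port into it.
$RUN ip netns add "$TARGET_NS"
$RUN ip link set "$TARGET_IF" netns "$TARGET_NS"

# Address each side: initiator in the root namespace, target inside the ns.
$RUN ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
$RUN ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"

# Bring the links (and the namespace loopback) up.
$RUN ip link set "$INITIATOR_IF" up
$RUN ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
$RUN ip netns exec "$TARGET_NS" ip link set lo up

# Open the NVMe/TCP listening port (4420) on the initiator-facing interface.
$RUN iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Reachability check in both directions, matching the ping output in the log.
$RUN ping -c 1 "$TARGET_IP"
$RUN ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"
```

With the namespace in place, the target process is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` in the trace that follows.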
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 719039 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 719039 ']' 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.114 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.114 [2024-07-25 03:50:41.286761] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:26.114 [2024-07-25 03:50:41.286830] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.114 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.114 [2024-07-25 03:50:41.322805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:26.114 [2024-07-25 03:50:41.353105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.372 [2024-07-25 03:50:41.444914] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.372 [2024-07-25 03:50:41.444973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.372 [2024-07-25 03:50:41.444999] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.372 [2024-07-25 03:50:41.445013] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.372 [2024-07-25 03:50:41.445025] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.372 [2024-07-25 03:50:41.445115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.372 [2024-07-25 03:50:41.445121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.372 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.372 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:26.372 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:26.372 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:26.372 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.372 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.372 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:26.372 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.372 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.372 [2024-07-25 03:50:41.588539] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.372 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.372 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:26.372 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.372 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.372 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.373 [2024-07-25 03:50:41.604801] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:07:26.373 NULL1 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.373 Delay0 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=719182 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:26.373 03:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:26.373 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.630 [2024-07-25 03:50:41.679462] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing 
connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:28.528 03:50:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:28.528 03:50:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.528 03:50:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 starting I/O failed: -6 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 starting I/O failed: -6 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 starting I/O failed: -6 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 starting I/O failed: -6 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 starting I/O failed: -6 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 starting I/O failed: -6 00:07:28.528 Read completed with error (sct=0, sc=8) 
00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 starting I/O failed: -6 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 starting I/O failed: -6 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 starting I/O failed: -6 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 starting I/O failed: -6 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 starting I/O failed: -6 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 starting I/O failed: -6 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 starting I/O failed: -6 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 starting I/O failed: -6 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 
00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 [2024-07-25 03:50:43.769118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f20 is same with the state(5) to be set 00:07:28.528 starting I/O failed: -6 00:07:28.528 Write completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.528 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 
starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 
00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 
00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error 
(sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 Write completed with error (sct=0, sc=8) 00:07:28.529 Read completed with error (sct=0, sc=8) 00:07:28.529 starting I/O failed: -6 00:07:28.529 starting I/O failed: -6 00:07:28.529 starting I/O failed: -6 00:07:28.529 starting I/O failed: -6 00:07:29.462 [2024-07-25 03:50:44.736918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1d81b40 is same with the state(5) to be set 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Write completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Write completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Write completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Write completed with error (sct=0, sc=8) 00:07:29.718 Write completed with error (sct=0, sc=8) 00:07:29.718 Write completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Write completed with error (sct=0, sc=8) 00:07:29.718 [2024-07-25 03:50:44.770960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d64100 is same with the state(5) to be set 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Write completed with error (sct=0, sc=8) 00:07:29.718 Write 
completed with error (sct=0, sc=8) 00:07:29.718 Write completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Write completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Read completed with error (sct=0, sc=8) 00:07:29.718 Write completed with error (sct=0, sc=8) 00:07:29.718 [2024-07-25 03:50:44.771174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63d40 is same with the state(5) to be set 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with 
error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 [2024-07-25 03:50:44.771631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffa5800d000 is same with the state(5) to be set 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, 
sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 Write completed with error (sct=0, sc=8) 00:07:29.719 Read completed with error (sct=0, sc=8) 00:07:29.719 [2024-07-25 03:50:44.772422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffa5800d660 is same with the state(5) to be set 00:07:29.719 Initializing NVMe Controllers 00:07:29.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:29.719 
Controller IO queue size 128, less than required. 00:07:29.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:29.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:29.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:29.719 Initialization complete. Launching workers. 00:07:29.719 ======================================================== 00:07:29.719 Latency(us) 00:07:29.719 Device Information : IOPS MiB/s Average min max 00:07:29.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.24 0.08 894709.10 586.94 1012117.24 00:07:29.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 189.10 0.09 897460.41 699.09 1012300.38 00:07:29.719 ======================================================== 00:07:29.719 Total : 359.33 0.18 896156.96 586.94 1012300.38 00:07:29.719 00:07:29.719 [2024-07-25 03:50:44.772962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d81b40 (9): Bad file descriptor 00:07:29.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:29.719 03:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.719 03:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:29.719 03:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 719182 00:07:29.719 03:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 719182 00:07:30.282 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (719182) - No such process 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 719182 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 719182 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 719182 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.282 03:50:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.282 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.283 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.283 [2024-07-25 03:50:45.292484] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.283 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.283 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.283 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.283 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.283 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.283 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=719588 00:07:30.283 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:30.283 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:30.283 03:50:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719588 00:07:30.283 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:30.283 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.283 [2024-07-25 03:50:45.351919] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:30.539 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:30.539 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719588 00:07:30.539 03:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:31.104 03:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:31.104 03:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719588 00:07:31.104 03:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:31.671 03:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:31.671 03:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719588 00:07:31.671 03:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:32.236 03:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:32.236 03:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719588 00:07:32.236 03:50:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:32.803 03:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:32.803 03:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719588 00:07:32.803 03:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:33.061 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:33.061 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719588 00:07:33.061 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:33.321 Initializing NVMe Controllers 00:07:33.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:33.321 Controller IO queue size 128, less than required. 00:07:33.321 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:33.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:33.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:33.321 Initialization complete. Launching workers. 
00:07:33.321 ======================================================== 00:07:33.321 Latency(us) 00:07:33.321 Device Information : IOPS MiB/s Average min max 00:07:33.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004438.45 1000212.02 1013337.41 00:07:33.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004172.88 1000214.79 1041334.11 00:07:33.321 ======================================================== 00:07:33.321 Total : 256.00 0.12 1004305.67 1000212.02 1041334.11 00:07:33.321 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719588 00:07:33.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (719588) - No such process 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 719588 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:07:33.580 rmmod nvme_tcp 00:07:33.580 rmmod nvme_fabrics 00:07:33.580 rmmod nvme_keyring 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 719039 ']' 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 719039 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 719039 ']' 00:07:33.580 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 719039 00:07:33.838 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:33.838 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.838 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 719039 00:07:33.838 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.838 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:33.838 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 719039' 00:07:33.838 killing process with pid 719039 00:07:33.838 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 719039 00:07:33.838 03:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 719039 
00:07:34.096 03:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:34.096 03:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:34.096 03:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:34.096 03:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:34.096 03:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:34.097 03:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.097 03:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.097 03:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.997 00:07:35.997 real 0m12.167s 00:07:35.997 user 0m27.487s 00:07:35.997 sys 0m2.983s 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.997 ************************************ 00:07:35.997 END TEST nvmf_delete_subsystem 00:07:35.997 ************************************ 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.997 ************************************ 00:07:35.997 START TEST nvmf_host_management 00:07:35.997 ************************************ 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:35.997 * Looking for test storage... 00:07:35.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:07:35.997 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.256 03:50:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:36.256 03:50:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:36.256 03:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.156 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.156 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:38.156 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:07:38.156 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:38.156 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:38.156 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:38.156 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:38.157 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:38.157 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:38.157 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:07:38.157 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:38.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:07:38.157 00:07:38.157 --- 10.0.0.2 ping statistics --- 00:07:38.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.157 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:38.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:07:38.157 00:07:38.157 --- 10.0.0.1 ping statistics --- 00:07:38.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.157 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.157 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:38.158 03:50:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=721947 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 721947 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 721947 ']' 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.158 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.158 [2024-07-25 03:50:53.409854] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:07:38.158 [2024-07-25 03:50:53.409923] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.158 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.158 [2024-07-25 03:50:53.445695] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:38.415 [2024-07-25 03:50:53.474509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.415 [2024-07-25 03:50:53.565898] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.415 [2024-07-25 03:50:53.565958] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.415 [2024-07-25 03:50:53.565972] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.415 [2024-07-25 03:50:53.565983] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.415 [2024-07-25 03:50:53.565996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:38.415 [2024-07-25 03:50:53.566088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.415 [2024-07-25 03:50:53.566163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.415 [2024-07-25 03:50:53.566232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:38.415 [2024-07-25 03:50:53.566234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.415 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.415 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:38.415 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:38.415 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:38.415 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.415 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.415 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:38.415 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.415 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.672 [2024-07-25 03:50:53.719809] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:38.672 03:50:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.672 Malloc0 00:07:38.672 [2024-07-25 03:50:53.781032] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=721996 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 721996 /var/tmp/bdevperf.sock 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 721996 ']' 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:38.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:38.672 { 00:07:38.672 "params": { 00:07:38.672 "name": "Nvme$subsystem", 00:07:38.672 "trtype": "$TEST_TRANSPORT", 00:07:38.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:38.672 "adrfam": "ipv4", 00:07:38.672 "trsvcid": "$NVMF_PORT", 00:07:38.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:38.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:38.672 "hdgst": ${hdgst:-false}, 
00:07:38.672 "ddgst": ${ddgst:-false} 00:07:38.672 }, 00:07:38.672 "method": "bdev_nvme_attach_controller" 00:07:38.672 } 00:07:38.672 EOF 00:07:38.672 )") 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:38.672 03:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:38.672 "params": { 00:07:38.672 "name": "Nvme0", 00:07:38.672 "trtype": "tcp", 00:07:38.672 "traddr": "10.0.0.2", 00:07:38.672 "adrfam": "ipv4", 00:07:38.672 "trsvcid": "4420", 00:07:38.672 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:38.672 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:38.672 "hdgst": false, 00:07:38.672 "ddgst": false 00:07:38.672 }, 00:07:38.672 "method": "bdev_nvme_attach_controller" 00:07:38.672 }' 00:07:38.672 [2024-07-25 03:50:53.861285] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:38.672 [2024-07-25 03:50:53.861375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid721996 ] 00:07:38.672 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.672 [2024-07-25 03:50:53.893783] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:38.672 [2024-07-25 03:50:53.922824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.929 [2024-07-25 03:50:54.010324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.187 Running I/O for 10 seconds... 
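The `--json /dev/fd/63` argument in the bdevperf command line above comes from `gen_nvmf_target_json`, which emits one `bdev_nvme_attach_controller` stanza per subsystem id and comma-joins them via `IFS=','`, as the heredoc and the rendered `printf` output in the trace show. A simplified stand-in (target address and NQNs copied from this run; the real helper in `nvmf/common.sh` also runs the result through `jq`):

```shell
#!/usr/bin/env bash
# Simplified stand-in for gen_nvmf_target_json: one attach-controller
# stanza per subsystem id, comma-joined into a single JSON config.
set -euo pipefail

gen_target_json() {
  local s config=()
  for s in "${@:-0}"; do
    config+=("{ \"params\": { \"name\": \"Nvme$s\", \"trtype\": \"tcp\", \"traddr\": \"10.0.0.2\", \"adrfam\": \"ipv4\", \"trsvcid\": \"4420\", \"subnqn\": \"nqn.2016-06.io.spdk:cnode$s\", \"hostnqn\": \"nqn.2016-06.io.spdk:host$s\", \"hdgst\": false, \"ddgst\": false }, \"method\": \"bdev_nvme_attach_controller\" }")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"
}

# bdevperf consumes this through process substitution, which is where the
# /dev/fd/63 in the logged command line comes from, e.g.:
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_target_json 0) \
#            -q 64 -o 65536 -w verify -t 10
gen_target_json 0
```

Passing several ids (`gen_target_json 0 1 2`) yields one comma-separated stanza per subsystem, each attaching `NvmeN` to `cnodeN`/`hostN`.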
00:07:39.187 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.187 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:39.187 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:39.188 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=534 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 534 -ge 100 ']' 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.445 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.705 [2024-07-25 03:50:54.748385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 00:07:39.705
is same with the state(5) to be set 00:07:39.705 [2024-07-25 03:50:54.749062] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 00:07:39.705 [2024-07-25 03:50:54.749074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 00:07:39.705 [2024-07-25 03:50:54.749086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 00:07:39.705 [2024-07-25 03:50:54.749099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 00:07:39.705 [2024-07-25 03:50:54.749111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 00:07:39.705 [2024-07-25 03:50:54.749123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 00:07:39.705 [2024-07-25 03:50:54.749135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 00:07:39.705 [2024-07-25 03:50:54.749147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 00:07:39.705 [2024-07-25 03:50:54.749174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 00:07:39.705 [2024-07-25 03:50:54.749186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 00:07:39.705 [2024-07-25 03:50:54.749197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 00:07:39.705 [2024-07-25 03:50:54.749209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 
00:07:39.705 [2024-07-25 03:50:54.749220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 00:07:39.705 [2024-07-25 03:50:54.749255] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35ae0 is same with the state(5) to be set 00:07:39.705 [2024-07-25 03:50:54.749521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.705 [2024-07-25 03:50:54.749572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.705 [2024-07-25 03:50:54.749621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.705 [2024-07-25 03:50:54.749643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.705 [2024-07-25 03:50:54.749660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.749675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.749689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.749703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.749718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.749732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.749747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.749761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.749777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.749790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.749806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.749820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.749835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.749848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.749863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.749877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.749892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 
nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.749905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.749921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.749934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.749950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.749963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.749978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.749991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:39.706 [2024-07-25 03:50:54.750069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750270] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 
03:50:54.750842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.706 [2024-07-25 03:50:54.750923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.706 [2024-07-25 03:50:54.750937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.750954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.750967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.750984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.750997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:39.707 [2024-07-25 03:50:54.751408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.707 [2024-07-25 03:50:54.751665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d45f0 is same with the state(5) to be set 00:07:39.707 [2024-07-25 03:50:54.751767] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10d45f0 was disconnected and freed. reset controller. 
00:07:39.707 [2024-07-25 03:50:54.751851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.707 [2024-07-25 03:50:54.751873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.707 [2024-07-25 03:50:54.751903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.707 [2024-07-25 03:50:54.751930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.707 [2024-07-25 03:50:54.751957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.707 [2024-07-25 03:50:54.751970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca2b50 is same with the state(5) to be set 00:07:39.707 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.707 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:39.707 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:39.707 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:39.707 [2024-07-25 03:50:54.753142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:07:39.707 task offset: 83072 on job bdev=Nvme0n1 fails
00:07:39.707
00:07:39.707 Latency(us)
00:07:39.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:39.707 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:39.707 Job: Nvme0n1 ended in about 0.41 seconds with error
00:07:39.707 Verification LBA range: start 0x0 length 0x400
00:07:39.707 Nvme0n1 : 0.41 1493.24 93.33 157.70 0.00 37662.35 6019.60 36311.80
00:07:39.707 ===================================================================================================================
00:07:39.707 Total : 1493.24 93.33 157.70 0.00 37662.35 6019.60 36311.80
00:07:39.707 [2024-07-25 03:50:54.755168] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:39.707 [2024-07-25 03:50:54.755197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca2b50 (9): Bad file descriptor
00:07:39.707 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:39.707 03:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-25 03:50:54.766854] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:40.668 03:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 721996 00:07:40.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (721996) - No such process 00:07:40.668 03:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:40.668 03:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:40.668 03:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:40.668 03:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:40.668 03:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:40.668 03:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:40.668 03:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:40.668 03:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:40.668 { 00:07:40.668 "params": { 00:07:40.668 "name": "Nvme$subsystem", 00:07:40.668 "trtype": "$TEST_TRANSPORT", 00:07:40.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:40.668 "adrfam": "ipv4", 00:07:40.668 "trsvcid": "$NVMF_PORT", 00:07:40.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:40.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:40.668 "hdgst": ${hdgst:-false}, 00:07:40.668 "ddgst": ${ddgst:-false} 00:07:40.668 }, 00:07:40.668 "method": "bdev_nvme_attach_controller" 00:07:40.668 } 00:07:40.668 EOF 00:07:40.668 )") 00:07:40.668 03:50:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:40.668 03:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:40.668 03:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:40.668 03:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:40.668 "params": { 00:07:40.668 "name": "Nvme0", 00:07:40.668 "trtype": "tcp", 00:07:40.668 "traddr": "10.0.0.2", 00:07:40.668 "adrfam": "ipv4", 00:07:40.668 "trsvcid": "4420", 00:07:40.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:40.668 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:40.668 "hdgst": false, 00:07:40.668 "ddgst": false 00:07:40.668 }, 00:07:40.668 "method": "bdev_nvme_attach_controller" 00:07:40.668 }' 00:07:40.668 [2024-07-25 03:50:55.808181] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:40.668 [2024-07-25 03:50:55.808285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid722280 ] 00:07:40.668 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.668 [2024-07-25 03:50:55.839391] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:40.668 [2024-07-25 03:50:55.868342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.668 [2024-07-25 03:50:55.958612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.233 Running I/O for 1 seconds... 
00:07:42.165 00:07:42.165 Latency(us) 00:07:42.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.165 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:42.165 Verification LBA range: start 0x0 length 0x400 00:07:42.165 Nvme0n1 : 1.01 1587.42 99.21 0.00 0.00 39673.52 7136.14 32816.55 00:07:42.165 =================================================================================================================== 00:07:42.165 Total : 1587.42 99.21 0.00 0.00 39673.52 7136.14 32816.55 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.422 rmmod nvme_tcp 
00:07:42.422 rmmod nvme_fabrics 00:07:42.422 rmmod nvme_keyring 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 721947 ']' 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 721947 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 721947 ']' 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 721947 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 721947 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 721947' 00:07:42.422 killing process with pid 721947 00:07:42.422 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 721947 00:07:42.423 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 721947 00:07:42.681 [2024-07-25 03:50:57.884748] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:42.681 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:42.681 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:42.681 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:42.681 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.681 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:42.681 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.681 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.681 03:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.210 03:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:45.210 03:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:45.210 00:07:45.210 real 0m8.718s 00:07:45.210 user 0m20.421s 00:07:45.210 sys 0m2.579s 00:07:45.210 03:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.210 03:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.210 ************************************ 00:07:45.210 END TEST nvmf_host_management 00:07:45.210 ************************************ 00:07:45.210 03:50:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
00:07:45.210 03:50:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:45.210 03:50:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.210 03:50:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.210 ************************************ 00:07:45.210 START TEST nvmf_lvol 00:07:45.210 ************************************ 00:07:45.210 03:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:45.210 * Looking for test storage... 00:07:45.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.210 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 
00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.211 03:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:47.108 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:47.109 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:47.109 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:47.109 03:51:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:47.109 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:47.109 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.109 03:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:47.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:07:47.109 00:07:47.109 --- 10.0.0.2 ping statistics --- 00:07:47.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.109 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:07:47.109 00:07:47.109 --- 10.0.0.1 ping statistics --- 00:07:47.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.109 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:47.109 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:47.110 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:47.110 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:47.110 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:47.110 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=724584 00:07:47.110 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:47.110 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 724584 00:07:47.110 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 724584 ']' 00:07:47.110 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.110 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.110 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:47.110 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.110 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:47.110 [2024-07-25 03:51:02.150686] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:47.110 [2024-07-25 03:51:02.150770] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.110 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.110 [2024-07-25 03:51:02.189619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:47.110 [2024-07-25 03:51:02.216065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.110 [2024-07-25 03:51:02.306159] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.110 [2024-07-25 03:51:02.306209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.110 [2024-07-25 03:51:02.306236] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.110 [2024-07-25 03:51:02.306258] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.110 [2024-07-25 03:51:02.306269] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
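The `-m 0x7` core mask passed to nvmf_tgt above selects cores by bit position (bits 0-2 set, hence the "Total cores available: 3" notice). A small sketch of counting the selected cores (hypothetical helper; not part of the SPDK scripts):

```shell
# Count the cores selected by an SPDK-style core mask such as -m 0x7
# (hypothetical helper; bit 0 = core 0, bit 1 = core 1, ...).
count_cores() {
  mask=$(printf '%d' "$1")   # printf accepts the 0x-prefixed hex form
  n=0
  while [ "$mask" -gt 0 ]; do
    n=$((n + (mask & 1)))
    mask=$((mask >> 1))
  done
  echo "$n"
}
```

For 0x7 this yields 3, matching the reactor startup on cores 0-2 above; the spdk_nvme_perf run later in this log uses `-c 0x18` (bits 3 and 4), which is why its I/O lands on lcores 3 and 4.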
00:07:47.110 [2024-07-25 03:51:02.306351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.110 [2024-07-25 03:51:02.306424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.110 [2024-07-25 03:51:02.306421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.367 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.367 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:47.367 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:47.367 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:47.367 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:47.367 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.367 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:47.367 [2024-07-25 03:51:02.665778] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.624 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:47.881 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:47.881 03:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:48.138 03:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:48.138 03:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:48.395 03:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:48.652 03:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cd427747-f1cd-4810-9f0d-38028a041b41 00:07:48.652 03:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cd427747-f1cd-4810-9f0d-38028a041b41 lvol 20 00:07:48.909 03:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bbce4696-0a57-494e-8bfe-4fbbe7bfe5c6 00:07:48.909 03:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:49.166 03:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bbce4696-0a57-494e-8bfe-4fbbe7bfe5c6 00:07:49.423 03:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:49.681 [2024-07-25 03:51:04.771009] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.681 03:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.938 03:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=724899 00:07:49.938 03:51:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:49.938 03:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:49.938 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.872 03:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bbce4696-0a57-494e-8bfe-4fbbe7bfe5c6 MY_SNAPSHOT 00:07:51.130 03:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2edcace1-2e37-4209-8820-ce06cebc1e74 00:07:51.130 03:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bbce4696-0a57-494e-8bfe-4fbbe7bfe5c6 30 00:07:51.387 03:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2edcace1-2e37-4209-8820-ce06cebc1e74 MY_CLONE 00:07:51.953 03:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8d40ed7a-80da-4bcd-89a9-cd62aa465158 00:07:51.953 03:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8d40ed7a-80da-4bcd-89a9-cd62aa465158 00:07:52.518 03:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 724899 00:08:00.622 Initializing NVMe Controllers 00:08:00.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:00.622 Controller IO queue size 128, less than required. 00:08:00.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:00.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:08:00.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:08:00.622 Initialization complete. Launching workers.
00:08:00.622 ========================================================
00:08:00.622 Latency(us)
00:08:00.622 Device Information : IOPS MiB/s Average min max
00:08:00.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10649.30 41.60 12023.57 426.12 71655.86
00:08:00.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9931.00 38.79 12896.47 2192.72 63962.19
00:08:00.622 ========================================================
00:08:00.622 Total : 20580.30 80.39 12444.79 426.12 71655.86
00:08:00.622
00:08:00.622 03:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:00.622 03:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bbce4696-0a57-494e-8bfe-4fbbe7bfe5c6
00:08:00.622 03:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cd427747-f1cd-4810-9f0d-38028a041b41
00:08:00.880 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:08:00.880 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:08:00.880 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:08:00.880 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:00.880 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:08:00.880 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol --
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:00.880 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:00.880 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.880 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:00.880 rmmod nvme_tcp 00:08:00.880 rmmod nvme_fabrics 00:08:01.137 rmmod nvme_keyring 00:08:01.137 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:01.138 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:01.138 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:01.138 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 724584 ']' 00:08:01.138 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 724584 00:08:01.138 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 724584 ']' 00:08:01.138 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 724584 00:08:01.138 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:01.138 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.138 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 724584 00:08:01.138 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:01.138 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:01.138 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 724584' 00:08:01.138 killing process with pid 724584 00:08:01.138 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@969 -- # kill 724584 00:08:01.138 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 724584 00:08:01.404 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:01.404 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:01.404 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:01.404 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:01.404 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:01.404 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.404 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.404 03:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.334 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:03.334 00:08:03.334 real 0m18.553s 00:08:03.334 user 1m4.033s 00:08:03.334 sys 0m5.390s 00:08:03.334 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.334 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:03.334 ************************************ 00:08:03.334 END TEST nvmf_lvol 00:08:03.334 ************************************ 00:08:03.334 03:51:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:03.334 03:51:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:03.334 03:51:18 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.334 03:51:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:03.334 ************************************ 00:08:03.334 START TEST nvmf_lvs_grow 00:08:03.334 ************************************ 00:08:03.334 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:03.593 * Looking for test storage... 00:08:03.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:03.593 03:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:05.495 03:51:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:05.495 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.495 
03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:05.495 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.495 03:51:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:05.495 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:05.495 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:05.495 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
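The PCI scan above collected two net devices (cvl_0_0 and cvl_0_1). The interface roles can be sketched from the `(( 2 > 1 ))` branch visible in this log: with more than one device, the first becomes the target side and the second the initiator side (the single-device fallback below is an assumption, not shown in this run):

```shell
# Sketch of nvmf_tcp_init-style interface selection (hypothetical helper).
# With >1 discovered device: first is the target, second the initiator.
# With exactly one device it plays both roles -- an assumption inferred,
# not confirmed by this log.
pick_nvmf_interfaces() {
  # prints "<target_interface> <initiator_interface>"
  if [ "$#" -gt 1 ]; then
    printf '%s %s\n' "$1" "$2"
  else
    printf '%s %s\n' "$1" "$1"
  fi
}
```

Given `cvl_0_0 cvl_0_1` this reproduces the assignment in the log: NVMF_TARGET_INTERFACE=cvl_0_0, NVMF_INITIATOR_INTERFACE=cvl_0_1.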
00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.496 03:51:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:05.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:05.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:08:05.496 00:08:05.496 --- 10.0.0.2 ping statistics --- 00:08:05.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.496 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:05.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:08:05.496 00:08:05.496 --- 10.0.0.1 ping statistics --- 00:08:05.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.496 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=728668 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 728668 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 728668 ']' 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.496 03:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:05.754 [2024-07-25 03:51:20.795338] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:05.754 [2024-07-25 03:51:20.795417] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.754 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.754 [2024-07-25 03:51:20.834012] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:05.754 [2024-07-25 03:51:20.866153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.754 [2024-07-25 03:51:20.956382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:05.754 [2024-07-25 03:51:20.956428] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.754 [2024-07-25 03:51:20.956443] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.754 [2024-07-25 03:51:20.956455] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.754 [2024-07-25 03:51:20.956465] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.754 [2024-07-25 03:51:20.956499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.012 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.012 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:06.012 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:06.012 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:06.012 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:06.012 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.013 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:06.270 [2024-07-25 03:51:21.324837] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.270 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:06.270 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:06.270 03:51:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.270 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:06.270 ************************************ 00:08:06.270 START TEST lvs_grow_clean 00:08:06.270 ************************************ 00:08:06.270 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:06.270 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:06.270 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:06.270 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:06.270 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:06.270 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:06.270 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:06.271 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:06.271 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:06.271 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 
00:08:06.528 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:06.528 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:06.785 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6e4ed103-dc90-4895-a8d4-7f6274341849 00:08:06.785 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e4ed103-dc90-4895-a8d4-7f6274341849 00:08:06.786 03:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:07.043 03:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:07.043 03:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:07.043 03:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6e4ed103-dc90-4895-a8d4-7f6274341849 lvol 150 00:08:07.300 03:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8c7962c3-b765-49d4-a4f5-e4f1391630d2 00:08:07.300 03:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.300 03:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan 
aio_bdev 00:08:07.557 [2024-07-25 03:51:22.639503] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:07.557 [2024-07-25 03:51:22.639589] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:07.557 true 00:08:07.557 03:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e4ed103-dc90-4895-a8d4-7f6274341849 00:08:07.558 03:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:07.814 03:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:07.814 03:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:08.071 03:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8c7962c3-b765-49d4-a4f5-e4f1391630d2 00:08:08.328 03:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:08.585 [2024-07-25 03:51:23.630578] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.585 03:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:08.843 03:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=729113 00:08:08.843 03:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:08.843 03:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:08.843 03:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 729113 /var/tmp/bdevperf.sock 00:08:08.843 03:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 729113 ']' 00:08:08.843 03:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:08.843 03:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.843 03:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:08.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:08.843 03:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.843 03:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:08.843 [2024-07-25 03:51:23.927964] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:08:08.843 [2024-07-25 03:51:23.928047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729113 ] 00:08:08.843 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.843 [2024-07-25 03:51:23.959887] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:08.843 [2024-07-25 03:51:23.989611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.843 [2024-07-25 03:51:24.079773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.100 03:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.100 03:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:09.100 03:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:09.664 Nvme0n1 00:08:09.664 03:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:09.664 [ 00:08:09.664 { 00:08:09.664 "name": "Nvme0n1", 00:08:09.664 "aliases": [ 00:08:09.664 "8c7962c3-b765-49d4-a4f5-e4f1391630d2" 00:08:09.664 ], 00:08:09.664 "product_name": "NVMe disk", 00:08:09.664 "block_size": 4096, 00:08:09.664 "num_blocks": 38912, 00:08:09.664 "uuid": "8c7962c3-b765-49d4-a4f5-e4f1391630d2", 00:08:09.664 "assigned_rate_limits": { 00:08:09.664 "rw_ios_per_sec": 0, 00:08:09.664 
"rw_mbytes_per_sec": 0, 00:08:09.664 "r_mbytes_per_sec": 0, 00:08:09.664 "w_mbytes_per_sec": 0 00:08:09.664 }, 00:08:09.664 "claimed": false, 00:08:09.664 "zoned": false, 00:08:09.664 "supported_io_types": { 00:08:09.664 "read": true, 00:08:09.664 "write": true, 00:08:09.664 "unmap": true, 00:08:09.664 "flush": true, 00:08:09.664 "reset": true, 00:08:09.664 "nvme_admin": true, 00:08:09.664 "nvme_io": true, 00:08:09.664 "nvme_io_md": false, 00:08:09.664 "write_zeroes": true, 00:08:09.664 "zcopy": false, 00:08:09.664 "get_zone_info": false, 00:08:09.664 "zone_management": false, 00:08:09.664 "zone_append": false, 00:08:09.664 "compare": true, 00:08:09.664 "compare_and_write": true, 00:08:09.664 "abort": true, 00:08:09.664 "seek_hole": false, 00:08:09.664 "seek_data": false, 00:08:09.664 "copy": true, 00:08:09.664 "nvme_iov_md": false 00:08:09.664 }, 00:08:09.664 "memory_domains": [ 00:08:09.664 { 00:08:09.664 "dma_device_id": "system", 00:08:09.664 "dma_device_type": 1 00:08:09.664 } 00:08:09.664 ], 00:08:09.664 "driver_specific": { 00:08:09.664 "nvme": [ 00:08:09.664 { 00:08:09.664 "trid": { 00:08:09.664 "trtype": "TCP", 00:08:09.664 "adrfam": "IPv4", 00:08:09.664 "traddr": "10.0.0.2", 00:08:09.664 "trsvcid": "4420", 00:08:09.664 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:09.664 }, 00:08:09.664 "ctrlr_data": { 00:08:09.664 "cntlid": 1, 00:08:09.664 "vendor_id": "0x8086", 00:08:09.664 "model_number": "SPDK bdev Controller", 00:08:09.664 "serial_number": "SPDK0", 00:08:09.664 "firmware_revision": "24.09", 00:08:09.664 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:09.664 "oacs": { 00:08:09.664 "security": 0, 00:08:09.664 "format": 0, 00:08:09.664 "firmware": 0, 00:08:09.664 "ns_manage": 0 00:08:09.664 }, 00:08:09.664 "multi_ctrlr": true, 00:08:09.664 "ana_reporting": false 00:08:09.664 }, 00:08:09.664 "vs": { 00:08:09.664 "nvme_version": "1.3" 00:08:09.664 }, 00:08:09.664 "ns_data": { 00:08:09.664 "id": 1, 00:08:09.664 "can_share": true 00:08:09.664 } 00:08:09.664 } 
00:08:09.664 ], 00:08:09.664 "mp_policy": "active_passive" 00:08:09.664 } 00:08:09.664 } 00:08:09.664 ] 00:08:09.664 03:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=729249 00:08:09.664 03:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:09.664 03:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:09.921 Running I/O for 10 seconds... 00:08:10.854 Latency(us) 00:08:10.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.854 Nvme0n1 : 1.00 14039.00 54.84 0.00 0.00 0.00 0.00 0.00 00:08:10.854 =================================================================================================================== 00:08:10.854 Total : 14039.00 54.84 0.00 0.00 0.00 0.00 0.00 00:08:10.854 00:08:11.786 03:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6e4ed103-dc90-4895-a8d4-7f6274341849 00:08:11.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.786 Nvme0n1 : 2.00 14230.00 55.59 0.00 0.00 0.00 0.00 0.00 00:08:11.786 =================================================================================================================== 00:08:11.786 Total : 14230.00 55.59 0.00 0.00 0.00 0.00 0.00 00:08:11.786 00:08:12.044 true 00:08:12.044 03:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e4ed103-dc90-4895-a8d4-7f6274341849 00:08:12.044 03:51:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:12.302 03:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:12.302 03:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:12.302 03:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 729249 00:08:12.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.867 Nvme0n1 : 3.00 14376.67 56.16 0.00 0.00 0.00 0.00 0.00 00:08:12.867 =================================================================================================================== 00:08:12.867 Total : 14376.67 56.16 0.00 0.00 0.00 0.00 0.00 00:08:12.867 00:08:13.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.800 Nvme0n1 : 4.00 14481.50 56.57 0.00 0.00 0.00 0.00 0.00 00:08:13.800 =================================================================================================================== 00:08:13.800 Total : 14481.50 56.57 0.00 0.00 0.00 0.00 0.00 00:08:13.800 00:08:15.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.171 Nvme0n1 : 5.00 14512.20 56.69 0.00 0.00 0.00 0.00 0.00 00:08:15.171 =================================================================================================================== 00:08:15.171 Total : 14512.20 56.69 0.00 0.00 0.00 0.00 0.00 00:08:15.171 00:08:16.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.105 Nvme0n1 : 6.00 14540.00 56.80 0.00 0.00 0.00 0.00 0.00 00:08:16.105 =================================================================================================================== 00:08:16.105 Total : 14540.00 56.80 0.00 0.00 0.00 0.00 0.00 00:08:16.105 00:08:17.036 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:08:17.036 Nvme0n1 : 7.00 14560.71 56.88 0.00 0.00 0.00 0.00 0.00 00:08:17.036 =================================================================================================================== 00:08:17.036 Total : 14560.71 56.88 0.00 0.00 0.00 0.00 0.00 00:08:17.036 00:08:17.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.998 Nvme0n1 : 8.00 14584.38 56.97 0.00 0.00 0.00 0.00 0.00 00:08:17.998 =================================================================================================================== 00:08:17.998 Total : 14584.38 56.97 0.00 0.00 0.00 0.00 0.00 00:08:17.998 00:08:18.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.930 Nvme0n1 : 9.00 14622.44 57.12 0.00 0.00 0.00 0.00 0.00 00:08:18.930 =================================================================================================================== 00:08:18.930 Total : 14622.44 57.12 0.00 0.00 0.00 0.00 0.00 00:08:18.930 00:08:19.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.863 Nvme0n1 : 10.00 14659.30 57.26 0.00 0.00 0.00 0.00 0.00 00:08:19.863 =================================================================================================================== 00:08:19.863 Total : 14659.30 57.26 0.00 0.00 0.00 0.00 0.00 00:08:19.863 00:08:19.863 00:08:19.863 Latency(us) 00:08:19.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.863 Nvme0n1 : 10.00 14666.35 57.29 0.00 0.00 8722.56 5170.06 17476.27 00:08:19.863 =================================================================================================================== 00:08:19.863 Total : 14666.35 57.29 0.00 0.00 8722.56 5170.06 17476.27 00:08:19.863 0 00:08:19.863 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@66 -- # killprocess 729113 00:08:19.863 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 729113 ']' 00:08:19.863 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 729113 00:08:19.863 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:19.863 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.863 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 729113 00:08:19.863 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:19.863 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:19.863 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 729113' 00:08:19.863 killing process with pid 729113 00:08:19.863 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 729113 00:08:19.863 Received shutdown signal, test time was about 10.000000 seconds 00:08:19.863 00:08:19.863 Latency(us) 00:08:19.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.863 =================================================================================================================== 00:08:19.863 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:19.863 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 729113 00:08:20.121 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:20.378 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:20.636 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e4ed103-dc90-4895-a8d4-7f6274341849 00:08:20.636 03:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:20.893 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:20.893 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:20.893 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:21.151 [2024-07-25 03:51:36.373114] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:21.151 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e4ed103-dc90-4895-a8d4-7f6274341849 00:08:21.151 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:21.151 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e4ed103-dc90-4895-a8d4-7f6274341849 00:08:21.151 03:51:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.151 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.151 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.151 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.151 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.151 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.151 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.151 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:21.151 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e4ed103-dc90-4895-a8d4-7f6274341849 00:08:21.409 request: 00:08:21.409 { 00:08:21.409 "uuid": "6e4ed103-dc90-4895-a8d4-7f6274341849", 00:08:21.409 "method": "bdev_lvol_get_lvstores", 00:08:21.409 "req_id": 1 00:08:21.409 } 00:08:21.409 Got JSON-RPC error response 00:08:21.409 response: 00:08:21.409 { 00:08:21.409 "code": -19, 00:08:21.409 "message": "No such device" 00:08:21.409 } 00:08:21.409 03:51:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:21.409 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:21.409 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:21.409 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:21.409 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:21.667 aio_bdev 00:08:21.667 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8c7962c3-b765-49d4-a4f5-e4f1391630d2 00:08:21.667 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=8c7962c3-b765-49d4-a4f5-e4f1391630d2 00:08:21.667 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.667 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:21.667 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.667 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.667 03:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:21.925 03:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8c7962c3-b765-49d4-a4f5-e4f1391630d2 -t 2000 00:08:22.183 [ 00:08:22.183 { 00:08:22.183 "name": "8c7962c3-b765-49d4-a4f5-e4f1391630d2", 00:08:22.183 "aliases": [ 00:08:22.183 "lvs/lvol" 00:08:22.183 ], 00:08:22.183 "product_name": "Logical Volume", 00:08:22.183 "block_size": 4096, 00:08:22.183 "num_blocks": 38912, 00:08:22.183 "uuid": "8c7962c3-b765-49d4-a4f5-e4f1391630d2", 00:08:22.183 "assigned_rate_limits": { 00:08:22.183 "rw_ios_per_sec": 0, 00:08:22.183 "rw_mbytes_per_sec": 0, 00:08:22.183 "r_mbytes_per_sec": 0, 00:08:22.183 "w_mbytes_per_sec": 0 00:08:22.183 }, 00:08:22.183 "claimed": false, 00:08:22.183 "zoned": false, 00:08:22.183 "supported_io_types": { 00:08:22.183 "read": true, 00:08:22.183 "write": true, 00:08:22.183 "unmap": true, 00:08:22.183 "flush": false, 00:08:22.183 "reset": true, 00:08:22.183 "nvme_admin": false, 00:08:22.183 "nvme_io": false, 00:08:22.183 "nvme_io_md": false, 00:08:22.183 "write_zeroes": true, 00:08:22.183 "zcopy": false, 00:08:22.183 "get_zone_info": false, 00:08:22.183 "zone_management": false, 00:08:22.183 "zone_append": false, 00:08:22.183 "compare": false, 00:08:22.183 "compare_and_write": false, 00:08:22.183 "abort": false, 00:08:22.183 "seek_hole": true, 00:08:22.183 "seek_data": true, 00:08:22.183 "copy": false, 00:08:22.183 "nvme_iov_md": false 00:08:22.183 }, 00:08:22.183 "driver_specific": { 00:08:22.183 "lvol": { 00:08:22.183 "lvol_store_uuid": "6e4ed103-dc90-4895-a8d4-7f6274341849", 00:08:22.183 "base_bdev": "aio_bdev", 00:08:22.183 "thin_provision": false, 00:08:22.183 "num_allocated_clusters": 38, 00:08:22.183 "snapshot": false, 00:08:22.183 "clone": false, 00:08:22.183 "esnap_clone": false 00:08:22.183 } 00:08:22.183 } 00:08:22.183 } 00:08:22.183 ] 00:08:22.183 03:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:22.183 03:51:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e4ed103-dc90-4895-a8d4-7f6274341849 00:08:22.183 03:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:22.441 03:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:22.441 03:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e4ed103-dc90-4895-a8d4-7f6274341849 00:08:22.441 03:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:22.699 03:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:22.699 03:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8c7962c3-b765-49d4-a4f5-e4f1391630d2 00:08:22.957 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6e4ed103-dc90-4895-a8d4-7f6274341849 00:08:23.216 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:23.474 00:08:23.474 real 0m17.331s 00:08:23.474 user 0m16.758s 00:08:23.474 sys 0m1.843s 00:08:23.474 
03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:23.474 ************************************ 00:08:23.474 END TEST lvs_grow_clean 00:08:23.474 ************************************ 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.474 ************************************ 00:08:23.474 START TEST lvs_grow_dirty 00:08:23.474 ************************************ 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:23.474 03:51:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:23.474 03:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.732 03:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:23.732 03:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:23.990 03:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=56709fc0-000c-4234-8fb7-103fe88d5fcf 00:08:23.990 03:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56709fc0-000c-4234-8fb7-103fe88d5fcf 00:08:23.990 03:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:24.247 03:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:24.247 03:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:24.248 03:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 56709fc0-000c-4234-8fb7-103fe88d5fcf lvol 150 00:08:24.505 03:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ffa2efda-34bc-4e14-9017-7b97eca4ed82 00:08:24.505 03:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.505 03:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:24.764 [2024-07-25 03:51:40.033698] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:24.764 [2024-07-25 03:51:40.033808] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:24.764 true 00:08:24.764 03:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:24.764 03:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56709fc0-000c-4234-8fb7-103fe88d5fcf 00:08:25.022 03:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:25.022 03:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:25.280 03:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ffa2efda-34bc-4e14-9017-7b97eca4ed82 00:08:25.538 03:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:25.796 [2024-07-25 03:51:41.060856] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.796 03:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:26.054 03:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=731175 00:08:26.054 03:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:26.054 03:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:26.054 03:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 731175 /var/tmp/bdevperf.sock 00:08:26.054 03:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 731175 ']' 00:08:26.054 03:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:26.054 03:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.054 03:51:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:26.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:26.054 03:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.054 03:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:26.312 [2024-07-25 03:51:41.357680] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:26.312 [2024-07-25 03:51:41.357762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid731175 ] 00:08:26.312 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.312 [2024-07-25 03:51:41.391890] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:26.312 [2024-07-25 03:51:41.422314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.312 [2024-07-25 03:51:41.513686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.570 03:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.570 03:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:26.570 03:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:26.828 Nvme0n1 00:08:26.828 03:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:27.086 [ 00:08:27.086 { 00:08:27.086 "name": "Nvme0n1", 00:08:27.086 "aliases": [ 00:08:27.086 "ffa2efda-34bc-4e14-9017-7b97eca4ed82" 00:08:27.086 ], 00:08:27.086 "product_name": "NVMe disk", 00:08:27.086 "block_size": 4096, 00:08:27.086 "num_blocks": 38912, 00:08:27.086 "uuid": "ffa2efda-34bc-4e14-9017-7b97eca4ed82", 00:08:27.086 "assigned_rate_limits": { 00:08:27.086 "rw_ios_per_sec": 0, 00:08:27.086 "rw_mbytes_per_sec": 0, 00:08:27.086 "r_mbytes_per_sec": 0, 00:08:27.086 "w_mbytes_per_sec": 0 00:08:27.086 }, 00:08:27.086 "claimed": false, 00:08:27.086 "zoned": false, 00:08:27.086 "supported_io_types": { 00:08:27.086 "read": true, 00:08:27.086 "write": true, 00:08:27.086 "unmap": true, 00:08:27.086 "flush": true, 00:08:27.086 "reset": true, 00:08:27.086 "nvme_admin": true, 00:08:27.086 "nvme_io": true, 00:08:27.086 "nvme_io_md": false, 00:08:27.086 "write_zeroes": true, 00:08:27.086 "zcopy": false, 00:08:27.086 "get_zone_info": false, 00:08:27.086 
"zone_management": false, 00:08:27.086 "zone_append": false, 00:08:27.086 "compare": true, 00:08:27.086 "compare_and_write": true, 00:08:27.086 "abort": true, 00:08:27.086 "seek_hole": false, 00:08:27.086 "seek_data": false, 00:08:27.086 "copy": true, 00:08:27.086 "nvme_iov_md": false 00:08:27.086 }, 00:08:27.086 "memory_domains": [ 00:08:27.086 { 00:08:27.086 "dma_device_id": "system", 00:08:27.086 "dma_device_type": 1 00:08:27.086 } 00:08:27.086 ], 00:08:27.086 "driver_specific": { 00:08:27.086 "nvme": [ 00:08:27.086 { 00:08:27.086 "trid": { 00:08:27.086 "trtype": "TCP", 00:08:27.086 "adrfam": "IPv4", 00:08:27.086 "traddr": "10.0.0.2", 00:08:27.086 "trsvcid": "4420", 00:08:27.086 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:27.086 }, 00:08:27.086 "ctrlr_data": { 00:08:27.086 "cntlid": 1, 00:08:27.086 "vendor_id": "0x8086", 00:08:27.086 "model_number": "SPDK bdev Controller", 00:08:27.086 "serial_number": "SPDK0", 00:08:27.086 "firmware_revision": "24.09", 00:08:27.086 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:27.086 "oacs": { 00:08:27.086 "security": 0, 00:08:27.086 "format": 0, 00:08:27.086 "firmware": 0, 00:08:27.086 "ns_manage": 0 00:08:27.086 }, 00:08:27.086 "multi_ctrlr": true, 00:08:27.086 "ana_reporting": false 00:08:27.086 }, 00:08:27.086 "vs": { 00:08:27.086 "nvme_version": "1.3" 00:08:27.086 }, 00:08:27.086 "ns_data": { 00:08:27.086 "id": 1, 00:08:27.086 "can_share": true 00:08:27.086 } 00:08:27.086 } 00:08:27.086 ], 00:08:27.086 "mp_policy": "active_passive" 00:08:27.086 } 00:08:27.086 } 00:08:27.086 ] 00:08:27.086 03:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=731311 00:08:27.086 03:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:27.086 03:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bdevperf.sock perform_tests 00:08:27.344 Running I/O for 10 seconds... 00:08:28.287 Latency(us) 00:08:28.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.287 Nvme0n1 : 1.00 13443.00 52.51 0.00 0.00 0.00 0.00 0.00 00:08:28.287 =================================================================================================================== 00:08:28.287 Total : 13443.00 52.51 0.00 0.00 0.00 0.00 0.00 00:08:28.287 00:08:29.218 03:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 56709fc0-000c-4234-8fb7-103fe88d5fcf 00:08:29.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.218 Nvme0n1 : 2.00 13541.50 52.90 0.00 0.00 0.00 0.00 0.00 00:08:29.218 =================================================================================================================== 00:08:29.218 Total : 13541.50 52.90 0.00 0.00 0.00 0.00 0.00 00:08:29.218 00:08:29.475 true 00:08:29.475 03:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56709fc0-000c-4234-8fb7-103fe88d5fcf 00:08:29.475 03:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:29.732 03:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:29.732 03:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:29.732 03:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 731311 00:08:30.296 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:08:30.296 Nvme0n1 : 3.00 13603.67 53.14 0.00 0.00 0.00 0.00 0.00 00:08:30.296 =================================================================================================================== 00:08:30.296 Total : 13603.67 53.14 0.00 0.00 0.00 0.00 0.00 00:08:30.296 00:08:31.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.229 Nvme0n1 : 4.00 13662.75 53.37 0.00 0.00 0.00 0.00 0.00 00:08:31.229 =================================================================================================================== 00:08:31.229 Total : 13662.75 53.37 0.00 0.00 0.00 0.00 0.00 00:08:31.229 00:08:32.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.162 Nvme0n1 : 5.00 13701.40 53.52 0.00 0.00 0.00 0.00 0.00 00:08:32.162 =================================================================================================================== 00:08:32.162 Total : 13701.40 53.52 0.00 0.00 0.00 0.00 0.00 00:08:32.162 00:08:33.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.145 Nvme0n1 : 6.00 13739.17 53.67 0.00 0.00 0.00 0.00 0.00 00:08:33.145 =================================================================================================================== 00:08:33.145 Total : 13739.17 53.67 0.00 0.00 0.00 0.00 0.00 00:08:33.145 00:08:34.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.519 Nvme0n1 : 7.00 13770.71 53.79 0.00 0.00 0.00 0.00 0.00 00:08:34.519 =================================================================================================================== 00:08:34.519 Total : 13770.71 53.79 0.00 0.00 0.00 0.00 0.00 00:08:34.519 00:08:35.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.453 Nvme0n1 : 8.00 13792.38 53.88 0.00 0.00 0.00 0.00 0.00 00:08:35.453 
=================================================================================================================== 00:08:35.453 Total : 13792.38 53.88 0.00 0.00 0.00 0.00 0.00 00:08:35.453 00:08:36.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.385 Nvme0n1 : 9.00 13815.44 53.97 0.00 0.00 0.00 0.00 0.00 00:08:36.385 =================================================================================================================== 00:08:36.385 Total : 13815.44 53.97 0.00 0.00 0.00 0.00 0.00 00:08:36.385 00:08:37.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.319 Nvme0n1 : 10.00 13831.50 54.03 0.00 0.00 0.00 0.00 0.00 00:08:37.319 =================================================================================================================== 00:08:37.319 Total : 13831.50 54.03 0.00 0.00 0.00 0.00 0.00 00:08:37.319 00:08:37.319 00:08:37.319 Latency(us) 00:08:37.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.319 Nvme0n1 : 10.01 13831.53 54.03 0.00 0.00 9245.09 2390.85 11699.39 00:08:37.319 =================================================================================================================== 00:08:37.319 Total : 13831.53 54.03 0.00 0.00 9245.09 2390.85 11699.39 00:08:37.319 0 00:08:37.319 03:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 731175 00:08:37.319 03:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 731175 ']' 00:08:37.319 03:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 731175 00:08:37.319 03:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:37.319 03:51:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.319 03:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 731175 00:08:37.319 03:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:37.319 03:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:37.319 03:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 731175' 00:08:37.319 killing process with pid 731175 00:08:37.319 03:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 731175 00:08:37.319 Received shutdown signal, test time was about 10.000000 seconds 00:08:37.319 00:08:37.319 Latency(us) 00:08:37.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.319 =================================================================================================================== 00:08:37.319 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:37.319 03:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 731175 00:08:37.577 03:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:37.835 03:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:38.093 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56709fc0-000c-4234-8fb7-103fe88d5fcf 00:08:38.093 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 728668 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 728668 00:08:38.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 728668 Killed "${NVMF_APP[@]}" "$@" 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=732649 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@482 -- # waitforlisten 732649 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 732649 ']' 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.351 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.351 [2024-07-25 03:51:53.569889] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:38.351 [2024-07-25 03:51:53.569984] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.351 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.351 [2024-07-25 03:51:53.616054] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:38.351 [2024-07-25 03:51:53.646431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.610 [2024-07-25 03:51:53.740502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:38.610 [2024-07-25 03:51:53.740567] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.610 [2024-07-25 03:51:53.740583] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.610 [2024-07-25 03:51:53.740596] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.610 [2024-07-25 03:51:53.740608] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.610 [2024-07-25 03:51:53.740647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.610 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.610 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:38.610 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:38.610 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:38.610 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.610 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.610 03:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:38.868 [2024-07-25 03:51:54.114029] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:38.868 [2024-07-25 03:51:54.114162] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:38.868 
[2024-07-25 03:51:54.114217] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:38.868 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:38.868 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ffa2efda-34bc-4e14-9017-7b97eca4ed82 00:08:38.868 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ffa2efda-34bc-4e14-9017-7b97eca4ed82 00:08:38.868 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:38.868 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:38.868 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:38.868 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:38.868 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:39.126 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ffa2efda-34bc-4e14-9017-7b97eca4ed82 -t 2000 00:08:39.385 [ 00:08:39.385 { 00:08:39.385 "name": "ffa2efda-34bc-4e14-9017-7b97eca4ed82", 00:08:39.385 "aliases": [ 00:08:39.385 "lvs/lvol" 00:08:39.385 ], 00:08:39.385 "product_name": "Logical Volume", 00:08:39.385 "block_size": 4096, 00:08:39.385 "num_blocks": 38912, 00:08:39.385 "uuid": "ffa2efda-34bc-4e14-9017-7b97eca4ed82", 00:08:39.385 "assigned_rate_limits": { 00:08:39.385 "rw_ios_per_sec": 0, 00:08:39.385 "rw_mbytes_per_sec": 0, 00:08:39.385 "r_mbytes_per_sec": 0, 
00:08:39.385 "w_mbytes_per_sec": 0 00:08:39.385 }, 00:08:39.385 "claimed": false, 00:08:39.385 "zoned": false, 00:08:39.385 "supported_io_types": { 00:08:39.385 "read": true, 00:08:39.385 "write": true, 00:08:39.385 "unmap": true, 00:08:39.385 "flush": false, 00:08:39.385 "reset": true, 00:08:39.385 "nvme_admin": false, 00:08:39.385 "nvme_io": false, 00:08:39.385 "nvme_io_md": false, 00:08:39.385 "write_zeroes": true, 00:08:39.385 "zcopy": false, 00:08:39.385 "get_zone_info": false, 00:08:39.385 "zone_management": false, 00:08:39.385 "zone_append": false, 00:08:39.385 "compare": false, 00:08:39.385 "compare_and_write": false, 00:08:39.385 "abort": false, 00:08:39.385 "seek_hole": true, 00:08:39.385 "seek_data": true, 00:08:39.385 "copy": false, 00:08:39.385 "nvme_iov_md": false 00:08:39.385 }, 00:08:39.385 "driver_specific": { 00:08:39.385 "lvol": { 00:08:39.385 "lvol_store_uuid": "56709fc0-000c-4234-8fb7-103fe88d5fcf", 00:08:39.385 "base_bdev": "aio_bdev", 00:08:39.385 "thin_provision": false, 00:08:39.385 "num_allocated_clusters": 38, 00:08:39.385 "snapshot": false, 00:08:39.385 "clone": false, 00:08:39.385 "esnap_clone": false 00:08:39.385 } 00:08:39.385 } 00:08:39.385 } 00:08:39.385 ] 00:08:39.385 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:39.385 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56709fc0-000c-4234-8fb7-103fe88d5fcf 00:08:39.385 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:39.643 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:39.643 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56709fc0-000c-4234-8fb7-103fe88d5fcf 00:08:39.643 03:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:39.901 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:39.901 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:40.160 [2024-07-25 03:51:55.399116] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:40.160 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56709fc0-000c-4234-8fb7-103fe88d5fcf 00:08:40.160 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:40.160 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56709fc0-000c-4234-8fb7-103fe88d5fcf 00:08:40.160 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.160 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.160 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.160 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.160 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.160 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.160 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.160 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:40.160 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56709fc0-000c-4234-8fb7-103fe88d5fcf 00:08:40.418 request: 00:08:40.418 { 00:08:40.418 "uuid": "56709fc0-000c-4234-8fb7-103fe88d5fcf", 00:08:40.418 "method": "bdev_lvol_get_lvstores", 00:08:40.418 "req_id": 1 00:08:40.418 } 00:08:40.418 Got JSON-RPC error response 00:08:40.418 response: 00:08:40.418 { 00:08:40.418 "code": -19, 00:08:40.418 "message": "No such device" 00:08:40.418 } 00:08:40.418 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:40.418 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:40.418 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:40.418 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:40.418 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.676 aio_bdev 00:08:40.676 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ffa2efda-34bc-4e14-9017-7b97eca4ed82 00:08:40.676 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ffa2efda-34bc-4e14-9017-7b97eca4ed82 00:08:40.676 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.676 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:40.676 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.676 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.676 03:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:40.934 03:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ffa2efda-34bc-4e14-9017-7b97eca4ed82 -t 2000 00:08:41.192 [ 00:08:41.192 { 00:08:41.192 "name": "ffa2efda-34bc-4e14-9017-7b97eca4ed82", 00:08:41.192 "aliases": [ 00:08:41.192 "lvs/lvol" 00:08:41.192 ], 00:08:41.192 "product_name": "Logical Volume", 00:08:41.192 "block_size": 4096, 00:08:41.192 "num_blocks": 38912, 00:08:41.192 "uuid": "ffa2efda-34bc-4e14-9017-7b97eca4ed82", 00:08:41.192 "assigned_rate_limits": { 00:08:41.192 "rw_ios_per_sec": 0, 00:08:41.192 "rw_mbytes_per_sec": 0, 00:08:41.192 "r_mbytes_per_sec": 0, 00:08:41.192 "w_mbytes_per_sec": 0 
00:08:41.192 }, 00:08:41.192 "claimed": false, 00:08:41.192 "zoned": false, 00:08:41.192 "supported_io_types": { 00:08:41.192 "read": true, 00:08:41.192 "write": true, 00:08:41.192 "unmap": true, 00:08:41.192 "flush": false, 00:08:41.192 "reset": true, 00:08:41.192 "nvme_admin": false, 00:08:41.192 "nvme_io": false, 00:08:41.192 "nvme_io_md": false, 00:08:41.192 "write_zeroes": true, 00:08:41.192 "zcopy": false, 00:08:41.192 "get_zone_info": false, 00:08:41.192 "zone_management": false, 00:08:41.192 "zone_append": false, 00:08:41.192 "compare": false, 00:08:41.192 "compare_and_write": false, 00:08:41.192 "abort": false, 00:08:41.192 "seek_hole": true, 00:08:41.192 "seek_data": true, 00:08:41.192 "copy": false, 00:08:41.192 "nvme_iov_md": false 00:08:41.192 }, 00:08:41.192 "driver_specific": { 00:08:41.192 "lvol": { 00:08:41.192 "lvol_store_uuid": "56709fc0-000c-4234-8fb7-103fe88d5fcf", 00:08:41.192 "base_bdev": "aio_bdev", 00:08:41.192 "thin_provision": false, 00:08:41.192 "num_allocated_clusters": 38, 00:08:41.192 "snapshot": false, 00:08:41.192 "clone": false, 00:08:41.192 "esnap_clone": false 00:08:41.192 } 00:08:41.192 } 00:08:41.192 } 00:08:41.192 ] 00:08:41.192 03:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:41.192 03:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56709fc0-000c-4234-8fb7-103fe88d5fcf 00:08:41.192 03:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:41.450 03:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:41.450 03:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 56709fc0-000c-4234-8fb7-103fe88d5fcf 00:08:41.450 03:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:41.708 03:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:41.708 03:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ffa2efda-34bc-4e14-9017-7b97eca4ed82 00:08:41.966 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 56709fc0-000c-4234-8fb7-103fe88d5fcf 00:08:42.224 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:42.483 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.483 00:08:42.483 real 0m19.006s 00:08:42.483 user 0m47.775s 00:08:42.483 sys 0m4.831s 00:08:42.483 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.483 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:42.483 ************************************ 00:08:42.483 END TEST lvs_grow_dirty 00:08:42.483 ************************************ 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@809 -- # id=0 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:42.741 nvmf_trace.0 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:42.741 rmmod nvme_tcp 00:08:42.741 rmmod nvme_fabrics 00:08:42.741 rmmod nvme_keyring 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:42.741 03:51:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 732649 ']' 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 732649 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 732649 ']' 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 732649 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 732649 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 732649' 00:08:42.741 killing process with pid 732649 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 732649 00:08:42.741 03:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 732649 00:08:43.000 03:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:43.000 03:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:43.000 03:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:43.000 03:51:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:43.000 03:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:43.000 03:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.000 03:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.000 03:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.900 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:44.900 00:08:44.900 real 0m41.585s 00:08:44.900 user 1m10.174s 00:08:44.900 sys 0m8.521s 00:08:44.900 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.900 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:44.900 ************************************ 00:08:44.900 END TEST nvmf_lvs_grow 00:08:44.900 ************************************ 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.159 ************************************ 00:08:45.159 START TEST nvmf_bdev_io_wait 00:08:45.159 ************************************ 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:45.159 * Looking for test storage... 00:08:45.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.159 
03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.159 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.160 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:45.160 03:52:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:45.160 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.160 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:45.160 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:45.160 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:45.160 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.160 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.160 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.160 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:45.160 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:45.160 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:08:45.160 03:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:47.060 03:52:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:47.060 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:47.060 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:47.060 03:52:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:47.060 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:47.060 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:08:47.060 03:52:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.060 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:47.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:08:47.318 00:08:47.318 --- 10.0.0.2 ping statistics --- 00:08:47.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.318 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:47.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:08:47.318 00:08:47.318 --- 10.0.0.1 ping statistics --- 00:08:47.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.318 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=735168 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 735168 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 735168 ']' 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.318 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.318 [2024-07-25 03:52:02.497565] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:47.318 [2024-07-25 03:52:02.497663] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.318 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.318 [2024-07-25 03:52:02.536225] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:47.318 [2024-07-25 03:52:02.566361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.576 [2024-07-25 03:52:02.661433] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.576 [2024-07-25 03:52:02.661492] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.576 [2024-07-25 03:52:02.661509] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.576 [2024-07-25 03:52:02.661522] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.576 [2024-07-25 03:52:02.661534] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.576 [2024-07-25 03:52:02.661615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.576 [2024-07-25 03:52:02.661670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.576 [2024-07-25 03:52:02.661784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.576 [2024-07-25 03:52:02.661787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.576 03:52:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.576 [2024-07-25 03:52:02.814674] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.576 Malloc0 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.576 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.834 [2024-07-25 03:52:02.878510] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=735200 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=735202 
00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:47.834 { 00:08:47.834 "params": { 00:08:47.834 "name": "Nvme$subsystem", 00:08:47.834 "trtype": "$TEST_TRANSPORT", 00:08:47.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.834 "adrfam": "ipv4", 00:08:47.834 "trsvcid": "$NVMF_PORT", 00:08:47.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.834 "hdgst": ${hdgst:-false}, 00:08:47.834 "ddgst": ${ddgst:-false} 00:08:47.834 }, 00:08:47.834 "method": "bdev_nvme_attach_controller" 00:08:47.834 } 00:08:47.834 EOF 00:08:47.834 )") 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=735204 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:47.834 03:52:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=735206 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:47.834 { 00:08:47.834 "params": { 00:08:47.834 "name": "Nvme$subsystem", 00:08:47.834 "trtype": "$TEST_TRANSPORT", 00:08:47.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.834 "adrfam": "ipv4", 00:08:47.834 "trsvcid": "$NVMF_PORT", 00:08:47.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.834 "hdgst": ${hdgst:-false}, 00:08:47.834 "ddgst": ${ddgst:-false} 00:08:47.834 }, 00:08:47.834 "method": "bdev_nvme_attach_controller" 00:08:47.834 } 00:08:47.834 EOF 00:08:47.834 )") 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:47.834 { 00:08:47.834 "params": { 00:08:47.834 "name": "Nvme$subsystem", 00:08:47.834 "trtype": "$TEST_TRANSPORT", 00:08:47.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.834 "adrfam": "ipv4", 00:08:47.834 "trsvcid": "$NVMF_PORT", 00:08:47.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.834 "hdgst": ${hdgst:-false}, 00:08:47.834 "ddgst": ${ddgst:-false} 00:08:47.834 }, 00:08:47.834 "method": "bdev_nvme_attach_controller" 00:08:47.834 } 00:08:47.834 EOF 00:08:47.834 )") 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:47.834 { 00:08:47.834 "params": { 00:08:47.834 "name": "Nvme$subsystem", 00:08:47.834 "trtype": "$TEST_TRANSPORT", 00:08:47.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.834 "adrfam": "ipv4", 00:08:47.834 "trsvcid": "$NVMF_PORT", 00:08:47.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.834 "hdgst": ${hdgst:-false}, 00:08:47.834 "ddgst": ${ddgst:-false} 00:08:47.834 }, 00:08:47.834 "method": "bdev_nvme_attach_controller" 00:08:47.834 } 00:08:47.834 EOF 00:08:47.834 )") 00:08:47.834 03:52:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 735200 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:47.834 "params": { 00:08:47.834 "name": "Nvme1", 00:08:47.834 "trtype": "tcp", 00:08:47.834 "traddr": "10.0.0.2", 00:08:47.834 "adrfam": "ipv4", 00:08:47.834 "trsvcid": "4420", 00:08:47.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:47.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:47.834 "hdgst": false, 00:08:47.834 "ddgst": false 00:08:47.834 }, 00:08:47.834 "method": "bdev_nvme_attach_controller" 00:08:47.834 }' 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:47.834 "params": { 00:08:47.834 "name": "Nvme1", 00:08:47.834 "trtype": "tcp", 00:08:47.834 "traddr": "10.0.0.2", 00:08:47.834 "adrfam": "ipv4", 00:08:47.834 "trsvcid": "4420", 00:08:47.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:47.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:47.834 "hdgst": false, 00:08:47.834 "ddgst": false 00:08:47.834 }, 00:08:47.834 
"method": "bdev_nvme_attach_controller" 00:08:47.834 }' 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:47.834 "params": { 00:08:47.834 "name": "Nvme1", 00:08:47.834 "trtype": "tcp", 00:08:47.834 "traddr": "10.0.0.2", 00:08:47.834 "adrfam": "ipv4", 00:08:47.834 "trsvcid": "4420", 00:08:47.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:47.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:47.834 "hdgst": false, 00:08:47.834 "ddgst": false 00:08:47.834 }, 00:08:47.834 "method": "bdev_nvme_attach_controller" 00:08:47.834 }' 00:08:47.834 03:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:47.834 "params": { 00:08:47.834 "name": "Nvme1", 00:08:47.834 "trtype": "tcp", 00:08:47.834 "traddr": "10.0.0.2", 00:08:47.834 "adrfam": "ipv4", 00:08:47.834 "trsvcid": "4420", 00:08:47.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:47.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:47.834 "hdgst": false, 00:08:47.834 "ddgst": false 00:08:47.834 }, 00:08:47.834 "method": "bdev_nvme_attach_controller" 00:08:47.834 }' 00:08:47.834 [2024-07-25 03:52:02.926713] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:47.834 [2024-07-25 03:52:02.926769] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:47.834 [2024-07-25 03:52:02.926769] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:47.834 [2024-07-25 03:52:02.926767] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:08:47.834 [2024-07-25 03:52:02.926792] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:47.834 [2024-07-25 03:52:02.926847] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:47.834 [2024-07-25 03:52:02.926848] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:47.834 [2024-07-25 03:52:02.926848] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:47.834 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.834 [2024-07-25 03:52:03.076076] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:47.834 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.834 [2024-07-25 03:52:03.104164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.091 [2024-07-25 03:52:03.173267] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:48.091 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.091 [2024-07-25 03:52:03.179167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:48.091 [2024-07-25 03:52:03.203023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.091 [2024-07-25 03:52:03.270472] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:48.091 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.091 [2024-07-25 03:52:03.278283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:48.091 [2024-07-25 03:52:03.300906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.091 [2024-07-25 03:52:03.346710] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:48.091 [2024-07-25 03:52:03.376787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.091 [2024-07-25 03:52:03.378969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:48.348 [2024-07-25 03:52:03.447111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:48.348 Running I/O for 1 seconds... 00:08:48.348 Running I/O for 1 seconds... 00:08:48.348 Running I/O for 1 seconds... 00:08:48.611 Running I/O for 1 seconds... 
00:08:49.639 00:08:49.639 Latency(us) 00:08:49.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.639 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:49.639 Nvme1n1 : 1.02 5718.89 22.34 0.00 0.00 22044.53 7961.41 33787.45 00:08:49.639 =================================================================================================================== 00:08:49.639 Total : 5718.89 22.34 0.00 0.00 22044.53 7961.41 33787.45 00:08:49.639 00:08:49.639 Latency(us) 00:08:49.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.639 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:49.639 Nvme1n1 : 1.00 189553.45 740.44 0.00 0.00 672.64 270.03 898.09 00:08:49.639 =================================================================================================================== 00:08:49.639 Total : 189553.45 740.44 0.00 0.00 672.64 270.03 898.09 00:08:49.639 00:08:49.639 Latency(us) 00:08:49.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.639 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:49.639 Nvme1n1 : 1.01 5585.70 21.82 0.00 0.00 22812.54 8349.77 42331.40 00:08:49.639 =================================================================================================================== 00:08:49.639 Total : 5585.70 21.82 0.00 0.00 22812.54 8349.77 42331.40 00:08:49.639 00:08:49.639 Latency(us) 00:08:49.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.639 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:49.639 Nvme1n1 : 1.01 9863.62 38.53 0.00 0.00 12915.41 6747.78 23981.32 00:08:49.639 =================================================================================================================== 00:08:49.639 Total : 9863.62 38.53 0.00 0.00 12915.41 6747.78 23981.32 00:08:49.897 03:52:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 735202 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 735204 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 735206 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.897 rmmod nvme_tcp 00:08:49.897 rmmod nvme_fabrics 00:08:49.897 rmmod nvme_keyring 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 
-- # set -e 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 735168 ']' 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 735168 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 735168 ']' 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 735168 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 735168 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 735168' 00:08:49.897 killing process with pid 735168 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 735168 00:08:49.897 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 735168 00:08:50.155 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:50.155 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:50.155 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:50.155 03:52:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.155 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.155 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.155 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.155 03:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:52.683 00:08:52.683 real 0m7.146s 00:08:52.683 user 0m16.467s 00:08:52.683 sys 0m3.431s 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.683 ************************************ 00:08:52.683 END TEST nvmf_bdev_io_wait 00:08:52.683 ************************************ 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.683 ************************************ 00:08:52.683 START TEST nvmf_queue_depth 00:08:52.683 ************************************ 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:52.683 * Looking for test storage... 00:08:52.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:52.683 03:52:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.683 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:52.684 
03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.684 03:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.584 03:52:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.584 03:52:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:54.584 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.584 
03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.584 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:54.584 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:54.585 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:54.585 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:08:54.585 
03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:08:54.585 00:08:54.585 --- 10.0.0.2 ping statistics --- 00:08:54.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.585 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:54.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:08:54.585 00:08:54.585 --- 10.0.0.1 ping statistics --- 00:08:54.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.585 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=737430 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 737430 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 737430 ']' 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.585 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.585 [2024-07-25 03:52:09.708347] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:54.585 [2024-07-25 03:52:09.708441] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.585 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.585 [2024-07-25 03:52:09.748093] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:54.585 [2024-07-25 03:52:09.775474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.585 [2024-07-25 03:52:09.864715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.585 [2024-07-25 03:52:09.864769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.585 [2024-07-25 03:52:09.864798] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.585 [2024-07-25 03:52:09.864809] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.585 [2024-07-25 03:52:09.864819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.585 [2024-07-25 03:52:09.864854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.842 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.843 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:54.843 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.843 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:54.843 03:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:54.843 [2024-07-25 03:52:10.010835] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.843 Malloc0 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.843 03:52:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.843 [2024-07-25 03:52:10.071398] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=737564 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 737564 /var/tmp/bdevperf.sock 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 737564 ']' 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:54.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.843 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.843 [2024-07-25 03:52:10.118971] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:54.843 [2024-07-25 03:52:10.119045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737564 ] 00:08:55.101 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.101 [2024-07-25 03:52:10.150286] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:55.101 [2024-07-25 03:52:10.181471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.101 [2024-07-25 03:52:10.271033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.101 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.101 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:55.101 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:55.101 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.101 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.358 NVMe0n1 00:08:55.358 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.358 03:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:55.616 Running I/O for 10 seconds... 00:09:05.578 00:09:05.578 Latency(us) 00:09:05.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.578 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:05.578 Verification LBA range: start 0x0 length 0x4000 00:09:05.578 NVMe0n1 : 10.10 8388.26 32.77 0.00 0.00 121523.16 20680.25 74177.04 00:09:05.578 =================================================================================================================== 00:09:05.578 Total : 8388.26 32.77 0.00 0.00 121523.16 20680.25 74177.04 00:09:05.578 0 00:09:05.578 03:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 737564 00:09:05.578 03:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 737564 ']' 00:09:05.578 03:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 737564 00:09:05.578 03:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:05.578 03:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.578 03:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 737564 00:09:05.578 03:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:05.578 03:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:05.578 03:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 737564' 00:09:05.578 killing process with pid 737564 00:09:05.578 03:52:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 737564 00:09:05.578 Received shutdown signal, test time was about 10.000000 seconds 00:09:05.578 00:09:05.578 Latency(us) 00:09:05.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.578 =================================================================================================================== 00:09:05.578 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:05.578 03:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 737564 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:05.836 rmmod nvme_tcp 00:09:05.836 rmmod nvme_fabrics 00:09:05.836 rmmod nvme_keyring 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 737430 ']' 00:09:05.836 
03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 737430 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 737430 ']' 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 737430 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.836 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 737430 00:09:06.093 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:06.093 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:06.093 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 737430' 00:09:06.093 killing process with pid 737430 00:09:06.093 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 737430 00:09:06.093 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 737430 00:09:06.350 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:06.350 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:06.350 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:06.350 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:06.350 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:06.350 03:52:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.350 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.350 03:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.248 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:08.248 00:09:08.248 real 0m16.038s 00:09:08.248 user 0m22.668s 00:09:08.248 sys 0m2.929s 00:09:08.248 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.248 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.248 ************************************ 00:09:08.248 END TEST nvmf_queue_depth 00:09:08.248 ************************************ 00:09:08.248 03:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:08.248 03:52:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:08.248 03:52:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.248 03:52:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.248 ************************************ 00:09:08.248 START TEST nvmf_target_multipath 00:09:08.248 ************************************ 00:09:08.248 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:08.506 * Looking for test storage... 
00:09:08.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.506 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.507 03:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 
00:09:10.410 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:10.410 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:10.410 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:10.410 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:10.410 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.411 03:52:25 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:10.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:10.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:09:10.411 00:09:10.411 --- 10.0.0.2 ping statistics --- 00:09:10.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.411 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:09:10.411 00:09:10.411 --- 10.0.0.1 ping statistics --- 00:09:10.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.411 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:10.411 03:52:25 
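
The trace up to this point is the `nvmf_tcp_init` flow from `nvmf/common.sh`: one of the two NICs is isolated in a network namespace so a single host can act as both NVMe-oF target and initiator over real hardware. A minimal sketch of that topology setup, with the interface names, addresses, and port taken from the log (requires root and two NICs on the same L2 segment):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init flow traced above; names mirror the log.
set -e

TARGET_IF=cvl_0_0        # moved into the namespace, will host the NVMe-oF target
INITIATOR_IF=cvl_0_1     # stays in the root namespace, hosts the initiator
NS=cvl_0_0_ns_spdk

# Start from clean interfaces
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

# Isolate the target NIC in its own namespace
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Initiator side 10.0.0.1, target side 10.0.0.2 (as in the log)
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic (port 4420) arriving on the initiator-facing NIC
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions before the test proper starts
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two ping checks correspond exactly to the ping statistics blocks in the log; the test only proceeds once both directions succeed.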
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:10.411 only one NIC for nvmf test 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:10.411 rmmod nvme_tcp 00:09:10.411 rmmod nvme_fabrics 00:09:10.411 rmmod nvme_keyring 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:10.411 03:52:25 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.411 03:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:12.959 03:52:27 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:12.959 00:09:12.959 real 0m4.233s 00:09:12.959 user 0m0.744s 00:09:12.959 sys 0m1.463s 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:12.959 ************************************ 00:09:12.959 END TEST nvmf_target_multipath 00:09:12.959 ************************************ 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.959 
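
The repeated `nvmftestfini`/`nvmf_tcp_fini` sequence above (it runs once from `multipath.sh@47` and again from the EXIT trap at `multipath.sh@1`) unloads the NVMe/TCP modules and tears the namespace back down. A sketch of that teardown, assuming the same names as the log (requires root; the retry loop exists because `nvme-tcp` can remain busy briefly after a disconnect, hence the `set +e`/`set -e` bracketing seen in the trace):

```shell
#!/usr/bin/env bash
# Sketch of the nvmftestfini / nvmf_tcp_fini teardown traced above.
NS=cvl_0_0_ns_spdk
INITIATOR_IF=cvl_0_1

sync

# Module removal may fail while a connection is draining; retry up to 20x.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e

# Deleting the namespace returns the target NIC to the root namespace
ip netns delete "$NS" 2>/dev/null || true
ip -4 addr flush "$INITIATOR_IF"
```

Note the `rmmod nvme_tcp` / `rmmod nvme_fabrics` / `rmmod nvme_keyring` lines in the log are the verbose output of a single `modprobe -v -r nvme-tcp`, which removes the dependent modules as well.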
03:52:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.959 ************************************ 00:09:12.959 START TEST nvmf_zcopy 00:09:12.959 ************************************ 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:12.959 * Looking for test storage... 00:09:12.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.959 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:12.960 03:52:27 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:12.960 03:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.869 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.869 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.870 03:52:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:14.870 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:14.870 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.870 03:52:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:14.870 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.870 
03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:14.870 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:14.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:14.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:09:14.870 00:09:14.870 --- 10.0.0.2 ping statistics --- 00:09:14.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.870 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:09:14.870 00:09:14.870 --- 10.0.0.1 ping statistics --- 00:09:14.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.870 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.870 03:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.870 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:14.870 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.870 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:09:14.870 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.870 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=742633 00:09:14.870 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:14.870 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 742633 00:09:14.870 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 742633 ']' 00:09:14.870 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.870 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.870 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.870 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.870 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.870 [2024-07-25 03:52:30.064292] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:09:14.870 [2024-07-25 03:52:30.064377] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.870 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.870 [2024-07-25 03:52:30.103101] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:14.870 [2024-07-25 03:52:30.131947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.130 [2024-07-25 03:52:30.223917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.131 [2024-07-25 03:52:30.223973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.131 [2024-07-25 03:52:30.223989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.131 [2024-07-25 03:52:30.224001] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.131 [2024-07-25 03:52:30.224012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:15.131 [2024-07-25 03:52:30.224046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.131 [2024-07-25 03:52:30.371545] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.131 [2024-07-25 03:52:30.387756] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.131 malloc0 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.131 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.390 03:52:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.390 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:15.390 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:15.390 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:15.390 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:15.390 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:15.390 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:15.390 { 00:09:15.390 "params": { 00:09:15.390 "name": "Nvme$subsystem", 00:09:15.390 "trtype": "$TEST_TRANSPORT", 00:09:15.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.390 "adrfam": "ipv4", 00:09:15.390 "trsvcid": "$NVMF_PORT", 00:09:15.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.390 "hdgst": ${hdgst:-false}, 00:09:15.390 "ddgst": ${ddgst:-false} 00:09:15.390 }, 00:09:15.390 "method": "bdev_nvme_attach_controller" 00:09:15.390 } 00:09:15.390 EOF 00:09:15.390 )") 00:09:15.390 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:15.390 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:15.390 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:15.390 03:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:15.390 "params": { 00:09:15.390 "name": "Nvme1", 00:09:15.390 "trtype": "tcp", 00:09:15.390 "traddr": "10.0.0.2", 00:09:15.390 "adrfam": "ipv4", 00:09:15.390 "trsvcid": "4420", 00:09:15.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.390 "hdgst": false, 00:09:15.390 "ddgst": false 00:09:15.390 }, 00:09:15.390 "method": "bdev_nvme_attach_controller" 00:09:15.390 }' 00:09:15.390 [2024-07-25 03:52:30.479855] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:09:15.390 [2024-07-25 03:52:30.479939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid742774 ] 00:09:15.390 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.390 [2024-07-25 03:52:30.513980] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:15.390 [2024-07-25 03:52:30.543705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.391 [2024-07-25 03:52:30.637883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.649 Running I/O for 10 seconds... 
00:09:25.614 00:09:25.614 Latency(us) 00:09:25.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.614 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:25.614 Verification LBA range: start 0x0 length 0x1000 00:09:25.614 Nvme1n1 : 10.02 5789.72 45.23 0.00 0.00 22045.56 2196.67 30874.74 00:09:25.614 =================================================================================================================== 00:09:25.614 Total : 5789.72 45.23 0.00 0.00 22045.56 2196.67 30874.74 00:09:25.873 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=743974 00:09:25.873 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:25.873 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.873 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:25.873 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:25.873 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:25.873 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:25.873 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:25.873 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:25.873 { 00:09:25.873 "params": { 00:09:25.873 "name": "Nvme$subsystem", 00:09:25.873 "trtype": "$TEST_TRANSPORT", 00:09:25.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:25.873 "adrfam": "ipv4", 00:09:25.873 "trsvcid": "$NVMF_PORT", 00:09:25.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:25.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:25.873 "hdgst": 
${hdgst:-false}, 00:09:25.873 "ddgst": ${ddgst:-false} 00:09:25.873 }, 00:09:25.873 "method": "bdev_nvme_attach_controller" 00:09:25.873 } 00:09:25.873 EOF 00:09:25.873 )") 00:09:25.873 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:25.873 [2024-07-25 03:52:41.142762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.873 [2024-07-25 03:52:41.142811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.873 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:25.873 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:25.873 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:25.873 "params": { 00:09:25.873 "name": "Nvme1", 00:09:25.873 "trtype": "tcp", 00:09:25.873 "traddr": "10.0.0.2", 00:09:25.873 "adrfam": "ipv4", 00:09:25.873 "trsvcid": "4420", 00:09:25.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:25.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:25.873 "hdgst": false, 00:09:25.873 "ddgst": false 00:09:25.873 }, 00:09:25.873 "method": "bdev_nvme_attach_controller" 00:09:25.873 }' 00:09:25.873 [2024-07-25 03:52:41.150695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.873 [2024-07-25 03:52:41.150722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.873 [2024-07-25 03:52:41.158707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.873 [2024-07-25 03:52:41.158730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.873 [2024-07-25 03:52:41.166720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.873 [2024-07-25 03:52:41.166740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.174761] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.174781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.179974] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:09:26.132 [2024-07-25 03:52:41.180048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid743974 ] 00:09:26.132 [2024-07-25 03:52:41.182764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.182784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.190790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.190810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.198810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.198830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.206830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.206849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.132 [2024-07-25 03:52:41.214332] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:26.132 [2024-07-25 03:52:41.214872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.214896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.222894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.222918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.230917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.230941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.238937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.238961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.244122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.132 [2024-07-25 03:52:41.246963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.246987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.255019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.255056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.263023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.263051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.271031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.271055] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.279054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.279079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.287074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.287098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.132 [2024-07-25 03:52:41.295105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.132 [2024-07-25 03:52:41.295131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.303157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.303199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.311134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.311157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.319165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.319190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.327186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.327211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.335207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.335232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:26.133 [2024-07-25 03:52:41.339537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.133 [2024-07-25 03:52:41.343230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.343262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.351258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.351295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.359324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.359358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.367347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.367384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.375374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.375414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.383381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.383420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.391401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.391438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.399421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.399459] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.407437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.407470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.415432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.415456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.423486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.423552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.133 [2024-07-25 03:52:41.431536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.133 [2024-07-25 03:52:41.431577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.439504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.439546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.447540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.447565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.455575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.455606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.463680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.463710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:26.392 [2024-07-25 03:52:41.471689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.471740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.479720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.479747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.487744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.487772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.495757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.495782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.503785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.503811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.511808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.511832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.519833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.519858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.527857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.527882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.535879] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.535903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.543907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.543933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.551928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.551953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.559952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.559976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.567974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.567998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.575999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.576023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.584028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.584054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.592047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.592071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.600069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.600093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.608092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.608116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.616112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.616136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.624140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.624165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.632159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.632183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.640189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.640219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.648208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.648235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 Running I/O for 5 seconds... 
00:09:26.392 [2024-07-25 03:52:41.656228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.656261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.671272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.671316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.392 [2024-07-25 03:52:41.682600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.392 [2024-07-25 03:52:41.682632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.651 [2024-07-25 03:52:41.693943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.651 [2024-07-25 03:52:41.693974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.651 [2024-07-25 03:52:41.705665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.651 [2024-07-25 03:52:41.705696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.651 [2024-07-25 03:52:41.717145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.651 [2024-07-25 03:52:41.717175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.651 [2024-07-25 03:52:41.728944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.651 [2024-07-25 03:52:41.728975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.651 [2024-07-25 03:52:41.740499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.651 [2024-07-25 03:52:41.740542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.651 [2024-07-25 03:52:41.751423] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.651 [2024-07-25 03:52:41.751451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.651
[... the two-line ERROR pair above (subsystem.c:2058 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1553 "Unable to add namespace") repeats roughly 170 more times at ~11 ms intervals, timestamps 03:52:41.764 through 03:52:43.628, elapsed markers 00:09:26.651 through 00:09:28.466, as the test repeatedly attempts to add a namespace whose NSID is already taken ...]
[2024-07-25 03:52:43.628756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.466 [2024-07-25 03:52:43.628786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.466 [2024-07-25 03:52:43.639907]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.466 [2024-07-25 03:52:43.639937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.466 [2024-07-25 03:52:43.651616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.466 [2024-07-25 03:52:43.651646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.466 [2024-07-25 03:52:43.663122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.466 [2024-07-25 03:52:43.663152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.466 [2024-07-25 03:52:43.674250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.466 [2024-07-25 03:52:43.674279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.466 [2024-07-25 03:52:43.685560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.466 [2024-07-25 03:52:43.685590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.466 [2024-07-25 03:52:43.698865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.466 [2024-07-25 03:52:43.698896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.466 [2024-07-25 03:52:43.709214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.466 [2024-07-25 03:52:43.709254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.466 [2024-07-25 03:52:43.720543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.466 [2024-07-25 03:52:43.720574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.466 [2024-07-25 03:52:43.733658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:28.466 [2024-07-25 03:52:43.733689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.466 [2024-07-25 03:52:43.744097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.466 [2024-07-25 03:52:43.744128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.466 [2024-07-25 03:52:43.755391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.466 [2024-07-25 03:52:43.755419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.723 [2024-07-25 03:52:43.768431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.723 [2024-07-25 03:52:43.768460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.723 [2024-07-25 03:52:43.778910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.723 [2024-07-25 03:52:43.778941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.723 [2024-07-25 03:52:43.790646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.723 [2024-07-25 03:52:43.790678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.723 [2024-07-25 03:52:43.802832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.723 [2024-07-25 03:52:43.802863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.723 [2024-07-25 03:52:43.814423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.723 [2024-07-25 03:52:43.814450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.723 [2024-07-25 03:52:43.828110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.723 
[2024-07-25 03:52:43.828148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.723 [2024-07-25 03:52:43.839139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.723 [2024-07-25 03:52:43.839170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.723 [2024-07-25 03:52:43.850499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.723 [2024-07-25 03:52:43.850550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.723 [2024-07-25 03:52:43.861491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.723 [2024-07-25 03:52:43.861534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.723 [2024-07-25 03:52:43.872846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.723 [2024-07-25 03:52:43.872876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.723 [2024-07-25 03:52:43.884350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.723 [2024-07-25 03:52:43.884377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.723 [2024-07-25 03:52:43.895957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.723 [2024-07-25 03:52:43.895987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.723 [2024-07-25 03:52:43.908128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.723 [2024-07-25 03:52:43.908158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.723 [2024-07-25 03:52:43.919984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.724 [2024-07-25 03:52:43.920014] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.724 [2024-07-25 03:52:43.931196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.724 [2024-07-25 03:52:43.931226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.724 [2024-07-25 03:52:43.942593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.724 [2024-07-25 03:52:43.942623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.724 [2024-07-25 03:52:43.954034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.724 [2024-07-25 03:52:43.954065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.724 [2024-07-25 03:52:43.965404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.724 [2024-07-25 03:52:43.965431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.724 [2024-07-25 03:52:43.977556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.724 [2024-07-25 03:52:43.977587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.724 [2024-07-25 03:52:43.988895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.724 [2024-07-25 03:52:43.988926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.724 [2024-07-25 03:52:44.002463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.724 [2024-07-25 03:52:44.002492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.724 [2024-07-25 03:52:44.012933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.724 [2024-07-25 03:52:44.012964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:28.981 [2024-07-25 03:52:44.024338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.024366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.035660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.035690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.047485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.047520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.059335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.059363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.072611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.072643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.083474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.083503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.094148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.094178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.105387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.105414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.116770] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.116800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.128762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.128792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.140266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.140309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.153638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.153669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.164499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.164544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.175897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.175927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.187327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.187354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.198809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.198839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.210234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.210287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.221616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.221646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.235183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.235213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.245150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.981 [2024-07-25 03:52:44.245180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.981 [2024-07-25 03:52:44.256992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.982 [2024-07-25 03:52:44.257022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.982 [2024-07-25 03:52:44.268335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.982 [2024-07-25 03:52:44.268370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.982 [2024-07-25 03:52:44.279257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.982 [2024-07-25 03:52:44.279287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.290408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.290435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.301845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 
[2024-07-25 03:52:44.301875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.313739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.313770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.325377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.325404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.338423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.338450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.349289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.349332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.360812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.360842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.374123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.374153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.384944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.384974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.396383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.396411] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.407632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.407662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.420610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.420640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.430995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.431025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.442381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.442408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.455276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.455319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.464982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.465012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.476673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.476703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.487720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.487750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:29.240 [2024-07-25 03:52:44.498671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.498701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.511699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.511729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.521848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.521877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.240 [2024-07-25 03:52:44.533637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.240 [2024-07-25 03:52:44.533668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.498 [2024-07-25 03:52:44.544673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.498 [2024-07-25 03:52:44.544703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.498 [2024-07-25 03:52:44.556221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.498 [2024-07-25 03:52:44.556259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.498 [2024-07-25 03:52:44.567758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.498 [2024-07-25 03:52:44.567788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.498 [2024-07-25 03:52:44.582038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.498 [2024-07-25 03:52:44.582068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.498 [2024-07-25 03:52:44.593068] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.498 [2024-07-25 03:52:44.593097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.498 [2024-07-25 03:52:44.604637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.498 [2024-07-25 03:52:44.604667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.498 [2024-07-25 03:52:44.616370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.498 [2024-07-25 03:52:44.616397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.498 [2024-07-25 03:52:44.627756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.498 [2024-07-25 03:52:44.627786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.498 [2024-07-25 03:52:44.641303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.498 [2024-07-25 03:52:44.641330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.498 [2024-07-25 03:52:44.652333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.498 [2024-07-25 03:52:44.652360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.499 [2024-07-25 03:52:44.664324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.499 [2024-07-25 03:52:44.664351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.499 [2024-07-25 03:52:44.675678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.499 [2024-07-25 03:52:44.675708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.499 [2024-07-25 03:52:44.689480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:29.499 [2024-07-25 03:52:44.689508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.499 [2024-07-25 03:52:44.700436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.499 [2024-07-25 03:52:44.700464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.499 [2024-07-25 03:52:44.711564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.499 [2024-07-25 03:52:44.711595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.499 [2024-07-25 03:52:44.722383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.499 [2024-07-25 03:52:44.722409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.499 [2024-07-25 03:52:44.733879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.499 [2024-07-25 03:52:44.733909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.499 [2024-07-25 03:52:44.745234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.499 [2024-07-25 03:52:44.745272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.499 [2024-07-25 03:52:44.756927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.499 [2024-07-25 03:52:44.756957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.499 [2024-07-25 03:52:44.768547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.499 [2024-07-25 03:52:44.768591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.499 [2024-07-25 03:52:44.781709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.499 
[2024-07-25 03:52:44.781739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.499 [2024-07-25 03:52:44.792150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.499 [2024-07-25 03:52:44.792180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.803807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.803837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.815214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.815251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.826541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.826580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.837601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.837632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.850789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.850820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.861130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.861161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.872641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.872672] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.883883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.883918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.895417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.895445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.906891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.906921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.920469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.920497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.931516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.931562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.943143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.943174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.956740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.956771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.757 [2024-07-25 03:52:44.967856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.967888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:29.757 [2024-07-25 03:52:44.979520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.757 [2024-07-25 03:52:44.979565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.678509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 
[2024-07-25 03:52:46.678552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569
00:09:31.569 Latency(us)
00:09:31.569 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:31.569 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:31.569 Nvme1n1                                :       5.01   11203.15      87.52       0.00       0.00   11409.46    5242.88   23787.14
00:09:31.569 ===================================================================================================================
00:09:31.569 Total                                  :            11203.15      87.52       0.00       0.00   11409.46    5242.88   23787.14
00:09:31.569 [2024-07-25 03:52:46.686556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.686585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.694563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.694591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.702635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.702685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.710662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.710712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.718678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.718726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.726695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.726742] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.734723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.734788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.742747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.742817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.750785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.750837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.758792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.758842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.766809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.766859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.774855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.774910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.782877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.782931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.790885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.790935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:31.569 [2024-07-25 03:52:46.798921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.798984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.806932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.806978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.814939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.814984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.822939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.822966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.830990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.831034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.839020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.839066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.847040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.847090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.855040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.855076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-07-25 03:52:46.863034] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-07-25 03:52:46.863061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.827 [2024-07-25 03:52:46.871124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-07-25 03:52:46.871172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-07-25 03:52:46.879133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-07-25 03:52:46.879179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-07-25 03:52:46.887136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-07-25 03:52:46.887172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-07-25 03:52:46.895130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-07-25 03:52:46.895155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-07-25 03:52:46.903154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-07-25 03:52:46.903178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (743974) - No such process 00:09:31.828 03:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 743974 00:09:31.828 03:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.828 03:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.828 03:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- 
# set +x 00:09:31.828 03:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.828 03:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:31.828 03:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.828 03:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.828 delay0 00:09:31.828 03:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.828 03:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:31.828 03:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.828 03:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.828 03:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.828 03:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:31.828 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.828 [2024-07-25 03:52:47.028390] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:38.411 Initializing NVMe Controllers 00:09:38.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:38.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:38.411 Initialization complete. Launching workers. 
00:09:38.411 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 899 00:09:38.411 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1186, failed to submit 33 00:09:38.411 success 992, unsuccess 194, failed 0 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:38.411 rmmod nvme_tcp 00:09:38.411 rmmod nvme_fabrics 00:09:38.411 rmmod nvme_keyring 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 742633 ']' 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 742633 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 742633 ']' 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 742633 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 742633 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 742633' 00:09:38.411 killing process with pid 742633 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 742633 00:09:38.411 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 742633 00:09:38.667 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.667 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:38.667 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:38.667 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.667 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:38.667 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.667 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.667 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.564 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:40.564 00:09:40.564 real 0m28.011s 
00:09:40.564 user 0m41.398s 00:09:40.564 sys 0m8.384s 00:09:40.564 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.564 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.565 ************************************ 00:09:40.565 END TEST nvmf_zcopy 00:09:40.565 ************************************ 00:09:40.565 03:52:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:40.565 03:52:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:40.565 03:52:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.565 03:52:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.565 ************************************ 00:09:40.565 START TEST nvmf_nmic 00:09:40.565 ************************************ 00:09:40.565 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:40.823 * Looking for test storage... 
00:09:40.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.823 
03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.823 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.824 03:52:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:40.824 03:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:42.724 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:42.725 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:42.725 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:42.725 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:42.725 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:42.725 03:52:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:42.725 03:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.725 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.725 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.725 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:42.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:09:42.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:09:42.725 00:09:42.725 --- 10.0.0.2 ping statistics --- 00:09:42.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.725 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:09:42.725 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:09:42.984 00:09:42.984 --- 10.0.0.1 ping statistics --- 00:09:42.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.984 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=747373 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 747373 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 747373 ']' 00:09:42.984 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.985 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.985 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.985 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.985 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.985 [2024-07-25 03:52:58.098774] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:09:42.985 [2024-07-25 03:52:58.098849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.985 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.985 [2024-07-25 03:52:58.136412] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:42.985 [2024-07-25 03:52:58.162781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.985 [2024-07-25 03:52:58.250677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.985 [2024-07-25 03:52:58.250735] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.985 [2024-07-25 03:52:58.250748] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.985 [2024-07-25 03:52:58.250774] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.985 [2024-07-25 03:52:58.250784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:42.985 [2024-07-25 03:52:58.250872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.985 [2024-07-25 03:52:58.250939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.985 [2024-07-25 03:52:58.251005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.985 [2024-07-25 03:52:58.251007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.243 [2024-07-25 03:52:58.408708] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:09:43.243 Malloc0 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.243 [2024-07-25 03:52:58.462558] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:43.243 test case1: single bdev can't be used in multiple subsystems 
00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.243 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.244 [2024-07-25 03:52:58.486347] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:43.244 [2024-07-25 03:52:58.486378] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:43.244 [2024-07-25 03:52:58.486394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.244 request: 00:09:43.244 { 00:09:43.244 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:43.244 "namespace": { 00:09:43.244 
"bdev_name": "Malloc0", 00:09:43.244 "no_auto_visible": false 00:09:43.244 }, 00:09:43.244 "method": "nvmf_subsystem_add_ns", 00:09:43.244 "req_id": 1 00:09:43.244 } 00:09:43.244 Got JSON-RPC error response 00:09:43.244 response: 00:09:43.244 { 00:09:43.244 "code": -32602, 00:09:43.244 "message": "Invalid parameters" 00:09:43.244 } 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:43.244 Adding namespace failed - expected result. 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:43.244 test case2: host connect to nvmf target in multiple paths 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.244 [2024-07-25 03:52:58.498454] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.244 03:52:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:44.177 03:52:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:44.743 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:44.743 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:44.743 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:44.743 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:44.743 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:46.640 03:53:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:46.640 03:53:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:46.640 03:53:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:46.640 03:53:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:46.640 03:53:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:46.640 03:53:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:46.640 03:53:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:46.640 [global] 00:09:46.640 thread=1 00:09:46.640 invalidate=1 00:09:46.640 rw=write 00:09:46.640 time_based=1 00:09:46.640 runtime=1 00:09:46.640 ioengine=libaio 00:09:46.640 direct=1 00:09:46.640 bs=4096 00:09:46.640 iodepth=1 00:09:46.640 
norandommap=0 00:09:46.640 numjobs=1 00:09:46.640 00:09:46.640 verify_dump=1 00:09:46.640 verify_backlog=512 00:09:46.640 verify_state_save=0 00:09:46.640 do_verify=1 00:09:46.640 verify=crc32c-intel 00:09:46.640 [job0] 00:09:46.640 filename=/dev/nvme0n1 00:09:46.640 Could not set queue depth (nvme0n1) 00:09:46.898 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.898 fio-3.35 00:09:46.898 Starting 1 thread 00:09:47.831 00:09:47.831 job0: (groupid=0, jobs=1): err= 0: pid=747885: Thu Jul 25 03:53:03 2024 00:09:47.831 read: IOPS=1662, BW=6649KiB/s (6809kB/s)(6656KiB/1001msec) 00:09:47.831 slat (nsec): min=6149, max=57262, avg=14119.10, stdev=4990.77 00:09:47.831 clat (usec): min=240, max=832, avg=288.00, stdev=22.01 00:09:47.831 lat (usec): min=247, max=840, avg=302.12, stdev=24.47 00:09:47.831 clat percentiles (usec): 00:09:47.831 | 1.00th=[ 245], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 273], 00:09:47.831 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:09:47.831 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 314], 00:09:47.831 | 99.00th=[ 322], 99.50th=[ 326], 99.90th=[ 392], 99.95th=[ 832], 00:09:47.831 | 99.99th=[ 832] 00:09:47.831 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:47.831 slat (nsec): min=7444, max=64733, avg=18726.78, stdev=6104.76 00:09:47.831 clat (usec): min=162, max=1014, avg=215.44, stdev=31.02 00:09:47.831 lat (usec): min=170, max=1033, avg=234.17, stdev=33.46 00:09:47.831 clat percentiles (usec): 00:09:47.831 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 198], 00:09:47.831 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:09:47.831 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 245], 95.00th=[ 260], 00:09:47.831 | 99.00th=[ 314], 99.50th=[ 338], 99.90th=[ 392], 99.95th=[ 404], 00:09:47.831 | 99.99th=[ 1012] 00:09:47.831 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 
0.00, samples=1 00:09:47.831 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:47.831 lat (usec) : 250=52.51%, 500=47.44%, 1000=0.03% 00:09:47.831 lat (msec) : 2=0.03% 00:09:47.831 cpu : usr=5.30%, sys=8.10%, ctx=3712, majf=0, minf=2 00:09:47.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.831 issued rwts: total=1664,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.831 00:09:47.831 Run status group 0 (all jobs): 00:09:47.831 READ: bw=6649KiB/s (6809kB/s), 6649KiB/s-6649KiB/s (6809kB/s-6809kB/s), io=6656KiB (6816kB), run=1001-1001msec 00:09:47.831 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:09:47.831 00:09:47.831 Disk stats (read/write): 00:09:47.831 nvme0n1: ios=1586/1752, merge=0/0, ticks=455/354, in_queue=809, util=91.98% 00:09:47.831 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:48.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:48.089 rmmod nvme_tcp 00:09:48.089 rmmod nvme_fabrics 00:09:48.089 rmmod nvme_keyring 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 747373 ']' 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 747373 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 747373 ']' 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 747373 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 747373 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 747373' 00:09:48.089 killing process with pid 747373 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 747373 00:09:48.089 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 747373 00:09:48.346 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:48.346 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:48.346 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:48.346 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:48.346 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:48.346 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.346 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.346 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:50.876 00:09:50.876 real 0m9.771s 00:09:50.876 user 0m22.093s 00:09:50.876 sys 0m2.379s 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:50.876 ************************************ 00:09:50.876 END TEST nvmf_nmic 00:09:50.876 ************************************ 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.876 ************************************ 00:09:50.876 START TEST nvmf_fio_target 00:09:50.876 ************************************ 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:50.876 * Looking for test storage... 
00:09:50.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.876 03:53:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.876 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:50.877 03:53:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:09:50.877 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.778 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:52.779 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:52.779 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:52.779 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:52.779 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:52.779 03:53:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:52.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:09:52.779 00:09:52.779 --- 10.0.0.2 ping statistics --- 00:09:52.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.779 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:09:52.779 00:09:52.779 --- 10.0.0.1 ping statistics --- 00:09:52.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.779 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=750076 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 750076 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 750076 ']' 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.779 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.779 [2024-07-25 03:53:07.946798] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:09:52.779 [2024-07-25 03:53:07.946895] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.779 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.779 [2024-07-25 03:53:07.986653] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:52.779 [2024-07-25 03:53:08.013437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.038 [2024-07-25 03:53:08.104536] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:53.038 [2024-07-25 03:53:08.104601] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.038 [2024-07-25 03:53:08.104621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.038 [2024-07-25 03:53:08.104633] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.038 [2024-07-25 03:53:08.104659] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.038 [2024-07-25 03:53:08.104709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.038 [2024-07-25 03:53:08.104769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.038 [2024-07-25 03:53:08.104835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.038 [2024-07-25 03:53:08.104837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.038 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.038 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:53.038 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.038 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:53.038 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.038 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.038 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:53.295 [2024-07-25 03:53:08.485235] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** 
TCP Transport Init *** 00:09:53.295 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.553 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:53.553 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.811 03:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:53.811 03:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.069 03:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:54.069 03:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.327 03:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:54.327 03:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:54.584 03:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.842 03:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:54.842 03:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.099 03:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # 
concat_malloc_bdevs+='Malloc5 ' 00:09:55.099 03:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.357 03:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:55.357 03:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:55.955 03:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:55.955 03:53:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:55.955 03:53:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.212 03:53:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:56.212 03:53:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:56.469 03:53:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.726 [2024-07-25 03:53:11.914925] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.726 03:53:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:56.982 03:53:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:57.239 03:53:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:57.805 03:53:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:57.805 03:53:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:57.805 03:53:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:57.805 03:53:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:57.805 03:53:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:57.805 03:53:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:00.327 03:53:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:00.327 03:53:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:00.327 03:53:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:00.327 03:53:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:00.327 03:53:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:00.327 03:53:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@1208 -- # return 0 00:10:00.327 03:53:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:00.327 [global] 00:10:00.327 thread=1 00:10:00.327 invalidate=1 00:10:00.327 rw=write 00:10:00.327 time_based=1 00:10:00.327 runtime=1 00:10:00.327 ioengine=libaio 00:10:00.327 direct=1 00:10:00.327 bs=4096 00:10:00.327 iodepth=1 00:10:00.327 norandommap=0 00:10:00.327 numjobs=1 00:10:00.328 00:10:00.328 verify_dump=1 00:10:00.328 verify_backlog=512 00:10:00.328 verify_state_save=0 00:10:00.328 do_verify=1 00:10:00.328 verify=crc32c-intel 00:10:00.328 [job0] 00:10:00.328 filename=/dev/nvme0n1 00:10:00.328 [job1] 00:10:00.328 filename=/dev/nvme0n2 00:10:00.328 [job2] 00:10:00.328 filename=/dev/nvme0n3 00:10:00.328 [job3] 00:10:00.328 filename=/dev/nvme0n4 00:10:00.328 Could not set queue depth (nvme0n1) 00:10:00.328 Could not set queue depth (nvme0n2) 00:10:00.328 Could not set queue depth (nvme0n3) 00:10:00.328 Could not set queue depth (nvme0n4) 00:10:00.328 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.328 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.328 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.328 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.328 fio-3.35 00:10:00.328 Starting 4 threads 00:10:01.259 00:10:01.259 job0: (groupid=0, jobs=1): err= 0: pid=751044: Thu Jul 25 03:53:16 2024 00:10:01.259 read: IOPS=22, BW=88.5KiB/s (90.7kB/s)(92.0KiB/1039msec) 00:10:01.259 slat (nsec): min=15310, max=34584, avg=23252.39, stdev=8679.74 00:10:01.259 clat (usec): min=576, max=41369, avg=39217.32, stdev=8424.23 00:10:01.259 lat (usec): min=593, max=41402, 
avg=39240.57, stdev=8425.57 00:10:01.259 clat percentiles (usec): 00:10:01.259 | 1.00th=[ 578], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:01.259 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:01.259 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:01.259 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:01.259 | 99.99th=[41157] 00:10:01.259 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:10:01.259 slat (nsec): min=6366, max=41508, avg=12082.88, stdev=7825.74 00:10:01.259 clat (usec): min=179, max=920, avg=250.27, stdev=58.62 00:10:01.259 lat (usec): min=187, max=942, avg=262.36, stdev=59.78 00:10:01.259 clat percentiles (usec): 00:10:01.259 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 00:10:01.259 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 235], 60.00th=[ 243], 00:10:01.259 | 70.00th=[ 260], 80.00th=[ 281], 90.00th=[ 334], 95.00th=[ 359], 00:10:01.259 | 99.00th=[ 408], 99.50th=[ 441], 99.90th=[ 922], 99.95th=[ 922], 00:10:01.259 | 99.99th=[ 922] 00:10:01.259 bw ( KiB/s): min= 4096, max= 4096, per=25.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:01.259 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:01.259 lat (usec) : 250=62.62%, 500=32.90%, 750=0.19%, 1000=0.19% 00:10:01.259 lat (msec) : 50=4.11% 00:10:01.259 cpu : usr=0.19%, sys=0.67%, ctx=537, majf=0, minf=1 00:10:01.259 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.259 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.259 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.259 job1: (groupid=0, jobs=1): err= 0: pid=751045: Thu Jul 25 03:53:16 2024 00:10:01.259 read: IOPS=19, BW=79.1KiB/s 
(81.0kB/s)(80.0KiB/1011msec) 00:10:01.259 slat (nsec): min=14885, max=33141, avg=24049.90, stdev=8937.85 00:10:01.259 clat (usec): min=40787, max=42053, avg=41403.37, stdev=524.61 00:10:01.259 lat (usec): min=40805, max=42068, avg=41427.42, stdev=521.89 00:10:01.260 clat percentiles (usec): 00:10:01.260 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:10:01.260 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:01.260 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:01.260 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:01.260 | 99.99th=[42206] 00:10:01.260 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:10:01.260 slat (nsec): min=6513, max=45830, avg=15167.41, stdev=8405.20 00:10:01.260 clat (usec): min=190, max=580, avg=336.77, stdev=78.26 00:10:01.260 lat (usec): min=199, max=589, avg=351.93, stdev=78.35 00:10:01.260 clat percentiles (usec): 00:10:01.260 | 1.00th=[ 204], 5.00th=[ 227], 10.00th=[ 241], 20.00th=[ 258], 00:10:01.260 | 30.00th=[ 273], 40.00th=[ 306], 50.00th=[ 334], 60.00th=[ 363], 00:10:01.260 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 437], 95.00th=[ 457], 00:10:01.260 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 578], 99.95th=[ 578], 00:10:01.260 | 99.99th=[ 578] 00:10:01.260 bw ( KiB/s): min= 4096, max= 4096, per=25.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:01.260 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:01.260 lat (usec) : 250=15.04%, 500=79.51%, 750=1.69% 00:10:01.260 lat (msec) : 50=3.76% 00:10:01.260 cpu : usr=0.50%, sys=0.59%, ctx=532, majf=0, minf=2 00:10:01.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.260 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:01.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.260 job2: (groupid=0, jobs=1): err= 0: pid=751046: Thu Jul 25 03:53:16 2024 00:10:01.260 read: IOPS=1021, BW=4088KiB/s (4186kB/s)(4108KiB/1005msec) 00:10:01.260 slat (nsec): min=5186, max=79004, avg=19909.45, stdev=11763.21 00:10:01.260 clat (usec): min=255, max=41006, avg=520.31, stdev=2197.14 00:10:01.260 lat (usec): min=264, max=41021, avg=540.21, stdev=2197.14 00:10:01.260 clat percentiles (usec): 00:10:01.260 | 1.00th=[ 273], 5.00th=[ 297], 10.00th=[ 310], 20.00th=[ 318], 00:10:01.260 | 30.00th=[ 326], 40.00th=[ 355], 50.00th=[ 383], 60.00th=[ 416], 00:10:01.260 | 70.00th=[ 445], 80.00th=[ 474], 90.00th=[ 515], 95.00th=[ 562], 00:10:01.260 | 99.00th=[ 611], 99.50th=[ 652], 99.90th=[41157], 99.95th=[41157], 00:10:01.260 | 99.99th=[41157] 00:10:01.260 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets 00:10:01.260 slat (nsec): min=6195, max=77471, avg=14586.35, stdev=8352.44 00:10:01.260 clat (usec): min=168, max=551, avg=270.15, stdev=69.40 00:10:01.260 lat (usec): min=177, max=561, avg=284.74, stdev=71.50 00:10:01.260 clat percentiles (usec): 00:10:01.260 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:10:01.260 | 30.00th=[ 215], 40.00th=[ 233], 50.00th=[ 253], 60.00th=[ 281], 00:10:01.260 | 70.00th=[ 302], 80.00th=[ 326], 90.00th=[ 367], 95.00th=[ 412], 00:10:01.260 | 99.00th=[ 461], 99.50th=[ 490], 99.90th=[ 519], 99.95th=[ 553], 00:10:01.260 | 99.99th=[ 553] 00:10:01.260 bw ( KiB/s): min= 5720, max= 6568, per=38.96%, avg=6144.00, stdev=599.63, samples=2 00:10:01.260 iops : min= 1430, max= 1642, avg=1536.00, stdev=149.91, samples=2 00:10:01.260 lat (usec) : 250=29.11%, 500=65.47%, 750=5.27% 00:10:01.260 lat (msec) : 10=0.04%, 50=0.12% 00:10:01.260 cpu : usr=2.29%, sys=4.48%, ctx=2563, majf=0, minf=1 00:10:01.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.260 issued rwts: total=1027,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.260 job3: (groupid=0, jobs=1): err= 0: pid=751047: Thu Jul 25 03:53:16 2024 00:10:01.260 read: IOPS=1277, BW=5111KiB/s (5234kB/s)(5116KiB/1001msec) 00:10:01.260 slat (nsec): min=5450, max=72217, avg=19026.69, stdev=11553.77 00:10:01.260 clat (usec): min=259, max=40889, avg=431.02, stdev=1172.73 00:10:01.260 lat (usec): min=267, max=40895, avg=450.05, stdev=1172.82 00:10:01.260 clat percentiles (usec): 00:10:01.260 | 1.00th=[ 289], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:10:01.260 | 30.00th=[ 326], 40.00th=[ 343], 50.00th=[ 363], 60.00th=[ 396], 00:10:01.260 | 70.00th=[ 429], 80.00th=[ 461], 90.00th=[ 510], 95.00th=[ 570], 00:10:01.260 | 99.00th=[ 644], 99.50th=[ 676], 99.90th=[10945], 99.95th=[40633], 00:10:01.260 | 99.99th=[40633] 00:10:01.260 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:01.260 slat (nsec): min=6397, max=59416, avg=12576.99, stdev=6687.91 00:10:01.260 clat (usec): min=165, max=601, avg=255.26, stdev=92.63 00:10:01.260 lat (usec): min=173, max=616, avg=267.83, stdev=95.46 00:10:01.260 clat percentiles (usec): 00:10:01.260 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:10:01.260 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 221], 00:10:01.260 | 70.00th=[ 245], 80.00th=[ 326], 90.00th=[ 420], 95.00th=[ 465], 00:10:01.260 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 586], 99.95th=[ 603], 00:10:01.260 | 99.99th=[ 603] 00:10:01.260 bw ( KiB/s): min= 5824, max= 5824, per=36.93%, avg=5824.00, stdev= 0.00, samples=1 00:10:01.260 iops : min= 1456, max= 1456, avg=1456.00, stdev= 0.00, samples=1 00:10:01.260 lat (usec) : 250=38.69%, 500=54.71%, 750=6.54% 00:10:01.260 lat (msec) : 20=0.04%, 50=0.04% 
00:10:01.260 cpu : usr=2.30%, sys=4.90%, ctx=2815, majf=0, minf=1 00:10:01.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.260 issued rwts: total=1279,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.260 00:10:01.260 Run status group 0 (all jobs): 00:10:01.260 READ: bw=9043KiB/s (9260kB/s), 79.1KiB/s-5111KiB/s (81.0kB/s-5234kB/s), io=9396KiB (9622kB), run=1001-1039msec 00:10:01.260 WRITE: bw=15.4MiB/s (16.1MB/s), 1971KiB/s-6138KiB/s (2018kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1039msec 00:10:01.260 00:10:01.260 Disk stats (read/write): 00:10:01.260 nvme0n1: ios=74/512, merge=0/0, ticks=986/127, in_queue=1113, util=87.37% 00:10:01.260 nvme0n2: ios=66/512, merge=0/0, ticks=746/164, in_queue=910, util=91.25% 00:10:01.260 nvme0n3: ios=1081/1312, merge=0/0, ticks=518/343, in_queue=861, util=95.09% 00:10:01.260 nvme0n4: ios=1081/1302, merge=0/0, ticks=519/337, in_queue=856, util=95.89% 00:10:01.260 03:53:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:01.260 [global] 00:10:01.260 thread=1 00:10:01.260 invalidate=1 00:10:01.260 rw=randwrite 00:10:01.260 time_based=1 00:10:01.260 runtime=1 00:10:01.260 ioengine=libaio 00:10:01.260 direct=1 00:10:01.260 bs=4096 00:10:01.260 iodepth=1 00:10:01.260 norandommap=0 00:10:01.260 numjobs=1 00:10:01.260 00:10:01.260 verify_dump=1 00:10:01.260 verify_backlog=512 00:10:01.260 verify_state_save=0 00:10:01.260 do_verify=1 00:10:01.260 verify=crc32c-intel 00:10:01.260 [job0] 00:10:01.260 filename=/dev/nvme0n1 00:10:01.260 [job1] 00:10:01.260 filename=/dev/nvme0n2 00:10:01.260 [job2] 00:10:01.260 filename=/dev/nvme0n3 
00:10:01.260 [job3] 00:10:01.260 filename=/dev/nvme0n4 00:10:01.518 Could not set queue depth (nvme0n1) 00:10:01.518 Could not set queue depth (nvme0n2) 00:10:01.518 Could not set queue depth (nvme0n3) 00:10:01.518 Could not set queue depth (nvme0n4) 00:10:01.518 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.518 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.518 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.518 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.518 fio-3.35 00:10:01.518 Starting 4 threads 00:10:02.888 00:10:02.888 job0: (groupid=0, jobs=1): err= 0: pid=751391: Thu Jul 25 03:53:17 2024 00:10:02.888 read: IOPS=813, BW=3253KiB/s (3331kB/s)(3256KiB/1001msec) 00:10:02.888 slat (nsec): min=5945, max=48572, avg=13868.07, stdev=5937.41 00:10:02.888 clat (usec): min=253, max=41136, avg=870.01, stdev=4477.24 00:10:02.888 lat (usec): min=261, max=41144, avg=883.88, stdev=4477.55 00:10:02.888 clat percentiles (usec): 00:10:02.888 | 1.00th=[ 273], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 318], 00:10:02.888 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 367], 00:10:02.888 | 70.00th=[ 383], 80.00th=[ 424], 90.00th=[ 482], 95.00th=[ 523], 00:10:02.888 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:02.888 | 99.99th=[41157] 00:10:02.888 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:02.888 slat (nsec): min=7979, max=74745, avg=18998.76, stdev=10087.95 00:10:02.888 clat (usec): min=167, max=513, avg=246.59, stdev=68.83 00:10:02.888 lat (usec): min=177, max=533, avg=265.58, stdev=73.38 00:10:02.888 clat percentiles (usec): 00:10:02.888 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 190], 00:10:02.888 | 
30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 223], 60.00th=[ 233], 00:10:02.888 | 70.00th=[ 251], 80.00th=[ 314], 90.00th=[ 371], 95.00th=[ 388], 00:10:02.888 | 99.00th=[ 416], 99.50th=[ 420], 99.90th=[ 478], 99.95th=[ 515], 00:10:02.888 | 99.99th=[ 515] 00:10:02.888 bw ( KiB/s): min= 4087, max= 4087, per=28.82%, avg=4087.00, stdev= 0.00, samples=1 00:10:02.888 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:02.888 lat (usec) : 250=38.74%, 500=58.22%, 750=2.50% 00:10:02.888 lat (msec) : 50=0.54% 00:10:02.888 cpu : usr=1.60%, sys=4.30%, ctx=1839, majf=0, minf=1 00:10:02.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.888 issued rwts: total=814,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.888 job1: (groupid=0, jobs=1): err= 0: pid=751392: Thu Jul 25 03:53:17 2024 00:10:02.888 read: IOPS=20, BW=83.1KiB/s (85.1kB/s)(84.0KiB/1011msec) 00:10:02.888 slat (nsec): min=13063, max=35039, avg=18645.48, stdev=6519.84 00:10:02.888 clat (usec): min=40564, max=42458, avg=41327.23, stdev=579.67 00:10:02.888 lat (usec): min=40579, max=42476, avg=41345.88, stdev=579.46 00:10:02.888 clat percentiles (usec): 00:10:02.888 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:02.888 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:02.888 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:02.888 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:02.888 | 99.99th=[42206] 00:10:02.888 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:10:02.888 slat (nsec): min=8878, max=53985, avg=19642.68, stdev=7610.41 00:10:02.888 clat (usec): min=201, max=628, avg=253.42, 
stdev=26.47 00:10:02.888 lat (usec): min=211, max=654, avg=273.07, stdev=28.48 00:10:02.888 clat percentiles (usec): 00:10:02.888 | 1.00th=[ 210], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 239], 00:10:02.888 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 255], 00:10:02.888 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:10:02.888 | 99.00th=[ 322], 99.50th=[ 347], 99.90th=[ 627], 99.95th=[ 627], 00:10:02.888 | 99.99th=[ 627] 00:10:02.888 bw ( KiB/s): min= 4096, max= 4096, per=28.89%, avg=4096.00, stdev= 0.00, samples=1 00:10:02.888 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:02.888 lat (usec) : 250=49.16%, 500=46.72%, 750=0.19% 00:10:02.888 lat (msec) : 50=3.94% 00:10:02.888 cpu : usr=1.39%, sys=0.59%, ctx=534, majf=0, minf=1 00:10:02.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.888 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.888 job2: (groupid=0, jobs=1): err= 0: pid=751393: Thu Jul 25 03:53:17 2024 00:10:02.888 read: IOPS=19, BW=79.9KiB/s (81.8kB/s)(80.0KiB/1001msec) 00:10:02.888 slat (nsec): min=13528, max=45459, avg=20148.15, stdev=9042.98 00:10:02.888 clat (usec): min=40850, max=41417, avg=40996.18, stdev=112.51 00:10:02.888 lat (usec): min=40869, max=41433, avg=41016.32, stdev=109.83 00:10:02.888 clat percentiles (usec): 00:10:02.888 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:02.888 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:02.888 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:02.888 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:02.888 | 99.99th=[41157] 00:10:02.888 write: 
IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:02.888 slat (nsec): min=8278, max=72714, avg=26254.43, stdev=11847.14 00:10:02.888 clat (usec): min=182, max=1248, avg=319.95, stdev=101.66 00:10:02.888 lat (usec): min=194, max=1270, avg=346.20, stdev=103.71 00:10:02.888 clat percentiles (usec): 00:10:02.888 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 255], 00:10:02.888 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 314], 60.00th=[ 334], 00:10:02.888 | 70.00th=[ 355], 80.00th=[ 379], 90.00th=[ 396], 95.00th=[ 433], 00:10:02.888 | 99.00th=[ 635], 99.50th=[ 1012], 99.90th=[ 1254], 99.95th=[ 1254], 00:10:02.888 | 99.99th=[ 1254] 00:10:02.888 bw ( KiB/s): min= 4087, max= 4087, per=28.82%, avg=4087.00, stdev= 0.00, samples=1 00:10:02.888 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:02.888 lat (usec) : 250=17.67%, 500=76.32%, 750=1.32%, 1000=0.38% 00:10:02.888 lat (msec) : 2=0.56%, 50=3.76% 00:10:02.888 cpu : usr=0.70%, sys=1.30%, ctx=533, majf=0, minf=2 00:10:02.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.888 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.888 job3: (groupid=0, jobs=1): err= 0: pid=751394: Thu Jul 25 03:53:17 2024 00:10:02.888 read: IOPS=1477, BW=5910KiB/s (6052kB/s)(5916KiB/1001msec) 00:10:02.888 slat (nsec): min=5722, max=66587, avg=11980.13, stdev=5519.57 00:10:02.888 clat (usec): min=279, max=42129, avg=410.38, stdev=1533.60 00:10:02.888 lat (usec): min=285, max=42136, avg=422.36, stdev=1533.47 00:10:02.888 clat percentiles (usec): 00:10:02.888 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 326], 00:10:02.888 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 355], 
00:10:02.888 | 70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 400], 95.00th=[ 429], 00:10:02.888 | 99.00th=[ 510], 99.50th=[ 586], 99.90th=[42206], 99.95th=[42206], 00:10:02.888 | 99.99th=[42206] 00:10:02.888 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:02.888 slat (nsec): min=7744, max=51838, avg=17503.86, stdev=7325.78 00:10:02.888 clat (usec): min=171, max=419, avg=218.71, stdev=32.86 00:10:02.888 lat (usec): min=181, max=441, avg=236.22, stdev=37.53 00:10:02.888 clat percentiles (usec): 00:10:02.888 | 1.00th=[ 176], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:10:02.888 | 30.00th=[ 194], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 225], 00:10:02.888 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 277], 00:10:02.888 | 99.00th=[ 306], 99.50th=[ 330], 99.90th=[ 383], 99.95th=[ 420], 00:10:02.888 | 99.99th=[ 420] 00:10:02.888 bw ( KiB/s): min= 8192, max= 8192, per=57.77%, avg=8192.00, stdev= 0.00, samples=1 00:10:02.888 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:02.888 lat (usec) : 250=41.69%, 500=57.78%, 750=0.46% 00:10:02.888 lat (msec) : 50=0.07% 00:10:02.888 cpu : usr=2.70%, sys=6.60%, ctx=3016, majf=0, minf=1 00:10:02.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.888 issued rwts: total=1479,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.888 00:10:02.888 Run status group 0 (all jobs): 00:10:02.888 READ: bw=9234KiB/s (9456kB/s), 79.9KiB/s-5910KiB/s (81.8kB/s-6052kB/s), io=9336KiB (9560kB), run=1001-1011msec 00:10:02.888 WRITE: bw=13.8MiB/s (14.5MB/s), 2026KiB/s-6138KiB/s (2074kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1011msec 00:10:02.888 00:10:02.888 Disk stats (read/write): 00:10:02.888 nvme0n1: 
ios=555/822, merge=0/0, ticks=957/196, in_queue=1153, util=99.10% 00:10:02.888 nvme0n2: ios=50/512, merge=0/0, ticks=1255/115, in_queue=1370, util=100.00% 00:10:02.888 nvme0n3: ios=76/512, merge=0/0, ticks=1465/161, in_queue=1626, util=97.81% 00:10:02.888 nvme0n4: ios=1103/1536, merge=0/0, ticks=1411/314, in_queue=1725, util=98.22% 00:10:02.888 03:53:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:02.888 [global] 00:10:02.888 thread=1 00:10:02.888 invalidate=1 00:10:02.888 rw=write 00:10:02.888 time_based=1 00:10:02.888 runtime=1 00:10:02.888 ioengine=libaio 00:10:02.888 direct=1 00:10:02.888 bs=4096 00:10:02.888 iodepth=128 00:10:02.888 norandommap=0 00:10:02.888 numjobs=1 00:10:02.888 00:10:02.888 verify_dump=1 00:10:02.888 verify_backlog=512 00:10:02.888 verify_state_save=0 00:10:02.888 do_verify=1 00:10:02.888 verify=crc32c-intel 00:10:02.888 [job0] 00:10:02.888 filename=/dev/nvme0n1 00:10:02.888 [job1] 00:10:02.888 filename=/dev/nvme0n2 00:10:02.888 [job2] 00:10:02.888 filename=/dev/nvme0n3 00:10:02.888 [job3] 00:10:02.888 filename=/dev/nvme0n4 00:10:02.888 Could not set queue depth (nvme0n1) 00:10:02.888 Could not set queue depth (nvme0n2) 00:10:02.888 Could not set queue depth (nvme0n3) 00:10:02.888 Could not set queue depth (nvme0n4) 00:10:03.146 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.146 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.146 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.146 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.146 fio-3.35 00:10:03.146 Starting 4 threads 00:10:04.515 00:10:04.515 job0: (groupid=0, jobs=1): err= 0: 
pid=751624: Thu Jul 25 03:53:19 2024 00:10:04.515 read: IOPS=4725, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1002msec) 00:10:04.515 slat (usec): min=3, max=6207, avg=99.38, stdev=549.42 00:10:04.515 clat (usec): min=973, max=23656, avg=12623.91, stdev=2062.30 00:10:04.515 lat (usec): min=5968, max=23675, avg=12723.29, stdev=2096.65 00:10:04.515 clat percentiles (usec): 00:10:04.515 | 1.00th=[ 8225], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[11338], 00:10:04.515 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:10:04.515 | 70.00th=[13173], 80.00th=[14484], 90.00th=[15270], 95.00th=[16057], 00:10:04.515 | 99.00th=[18482], 99.50th=[20317], 99.90th=[23725], 99.95th=[23725], 00:10:04.515 | 99.99th=[23725] 00:10:04.515 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:04.515 slat (usec): min=4, max=5808, avg=93.87, stdev=435.34 00:10:04.515 clat (usec): min=5250, max=27915, avg=13123.72, stdev=3480.24 00:10:04.515 lat (usec): min=5839, max=27942, avg=13217.59, stdev=3510.64 00:10:04.515 clat percentiles (usec): 00:10:04.515 | 1.00th=[ 7373], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11469], 00:10:04.515 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:10:04.515 | 70.00th=[12518], 80.00th=[13435], 90.00th=[17171], 95.00th=[23200], 00:10:04.515 | 99.00th=[23987], 99.50th=[24249], 99.90th=[27919], 99.95th=[27919], 00:10:04.515 | 99.99th=[27919] 00:10:04.515 bw ( KiB/s): min=19848, max=21112, per=27.22%, avg=20480.00, stdev=893.78, samples=2 00:10:04.515 iops : min= 4962, max= 5278, avg=5120.00, stdev=223.45, samples=2 00:10:04.515 lat (usec) : 1000=0.01% 00:10:04.515 lat (msec) : 10=6.70%, 20=88.41%, 50=4.88% 00:10:04.515 cpu : usr=5.99%, sys=10.29%, ctx=558, majf=0, minf=1 00:10:04.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:04.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.515 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.515 job1: (groupid=0, jobs=1): err= 0: pid=751625: Thu Jul 25 03:53:19 2024 00:10:04.515 read: IOPS=5504, BW=21.5MiB/s (22.5MB/s)(21.6MiB/1005msec) 00:10:04.515 slat (usec): min=3, max=11334, avg=94.50, stdev=650.22 00:10:04.515 clat (usec): min=3134, max=23200, avg=12316.07, stdev=2877.50 00:10:04.515 lat (usec): min=4165, max=23236, avg=12410.57, stdev=2916.95 00:10:04.515 clat percentiles (usec): 00:10:04.516 | 1.00th=[ 5997], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10421], 00:10:04.516 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:10:04.516 | 70.00th=[12780], 80.00th=[13960], 90.00th=[16712], 95.00th=[18744], 00:10:04.516 | 99.00th=[21103], 99.50th=[21627], 99.90th=[23200], 99.95th=[23200], 00:10:04.516 | 99.99th=[23200] 00:10:04.516 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:10:04.516 slat (usec): min=3, max=9324, avg=75.01, stdev=424.06 00:10:04.516 clat (usec): min=1464, max=23122, avg=10524.99, stdev=2710.43 00:10:04.516 lat (usec): min=1488, max=23144, avg=10600.00, stdev=2729.19 00:10:04.516 clat percentiles (usec): 00:10:04.516 | 1.00th=[ 3294], 5.00th=[ 5407], 10.00th=[ 6259], 20.00th=[ 7767], 00:10:04.516 | 30.00th=[10159], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:10:04.516 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12256], 95.00th=[13698], 00:10:04.516 | 99.00th=[16188], 99.50th=[16188], 99.90th=[21627], 99.95th=[21890], 00:10:04.516 | 99.99th=[23200] 00:10:04.516 bw ( KiB/s): min=22352, max=22704, per=29.94%, avg=22528.00, stdev=248.90, samples=2 00:10:04.516 iops : min= 5588, max= 5676, avg=5632.00, stdev=62.23, samples=2 00:10:04.516 lat (msec) : 2=0.17%, 4=0.71%, 10=21.62%, 20=75.99%, 50=1.51% 00:10:04.516 cpu : usr=8.17%, sys=9.16%, ctx=583, majf=0, minf=1 00:10:04.516 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:04.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.516 issued rwts: total=5532,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.516 job2: (groupid=0, jobs=1): err= 0: pid=751626: Thu Jul 25 03:53:19 2024 00:10:04.516 read: IOPS=4419, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1007msec) 00:10:04.516 slat (usec): min=3, max=13140, avg=118.48, stdev=825.87 00:10:04.516 clat (usec): min=2712, max=28203, avg=14956.18, stdev=3717.78 00:10:04.516 lat (usec): min=5088, max=28209, avg=15074.66, stdev=3766.27 00:10:04.516 clat percentiles (usec): 00:10:04.516 | 1.00th=[ 6783], 5.00th=[10945], 10.00th=[11469], 20.00th=[12125], 00:10:04.516 | 30.00th=[13173], 40.00th=[13960], 50.00th=[14222], 60.00th=[14746], 00:10:04.516 | 70.00th=[15270], 80.00th=[17171], 90.00th=[20055], 95.00th=[23200], 00:10:04.516 | 99.00th=[26084], 99.50th=[26870], 99.90th=[28181], 99.95th=[28181], 00:10:04.516 | 99.99th=[28181] 00:10:04.516 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:10:04.516 slat (usec): min=4, max=12730, avg=92.85, stdev=515.82 00:10:04.516 clat (usec): min=1482, max=28269, avg=13228.22, stdev=3285.94 00:10:04.516 lat (usec): min=1495, max=28280, avg=13321.07, stdev=3313.41 00:10:04.516 clat percentiles (usec): 00:10:04.516 | 1.00th=[ 4948], 5.00th=[ 7046], 10.00th=[ 7832], 20.00th=[ 9896], 00:10:04.516 | 30.00th=[12649], 40.00th=[13960], 50.00th=[14353], 60.00th=[14615], 00:10:04.516 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15926], 95.00th=[16909], 00:10:04.516 | 99.00th=[20317], 99.50th=[20579], 99.90th=[27657], 99.95th=[28181], 00:10:04.516 | 99.99th=[28181] 00:10:04.516 bw ( KiB/s): min=17544, max=19320, per=24.49%, avg=18432.00, stdev=1255.82, samples=2 00:10:04.516 iops : min= 4386, max= 
4830, avg=4608.00, stdev=313.96, samples=2 00:10:04.516 lat (msec) : 2=0.02%, 4=0.30%, 10=12.14%, 20=81.70%, 50=5.84% 00:10:04.516 cpu : usr=5.67%, sys=9.54%, ctx=492, majf=0, minf=1 00:10:04.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:04.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.516 issued rwts: total=4450,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.516 job3: (groupid=0, jobs=1): err= 0: pid=751627: Thu Jul 25 03:53:19 2024 00:10:04.516 read: IOPS=3094, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1003msec) 00:10:04.516 slat (usec): min=3, max=17560, avg=153.66, stdev=993.46 00:10:04.516 clat (usec): min=867, max=44072, avg=19000.50, stdev=6611.82 00:10:04.516 lat (usec): min=4731, max=44085, avg=19154.17, stdev=6643.01 00:10:04.516 clat percentiles (usec): 00:10:04.516 | 1.00th=[ 6194], 5.00th=[11600], 10.00th=[15008], 20.00th=[15270], 00:10:04.516 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15926], 60.00th=[16909], 00:10:04.516 | 70.00th=[21103], 80.00th=[23462], 90.00th=[27919], 95.00th=[32900], 00:10:04.516 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:10:04.516 | 99.99th=[44303] 00:10:04.516 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:10:04.516 slat (usec): min=3, max=14166, avg=133.66, stdev=627.58 00:10:04.516 clat (usec): min=1513, max=56586, avg=19038.92, stdev=9315.37 00:10:04.516 lat (usec): min=1525, max=56598, avg=19172.58, stdev=9387.19 00:10:04.516 clat percentiles (usec): 00:10:04.516 | 1.00th=[ 4752], 5.00th=[ 8455], 10.00th=[11469], 20.00th=[15139], 00:10:04.516 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16319], 60.00th=[16909], 00:10:04.516 | 70.00th=[19006], 80.00th=[21103], 90.00th=[29754], 95.00th=[43779], 00:10:04.516 | 99.00th=[53216], 
99.50th=[53740], 99.90th=[56361], 99.95th=[56361], 00:10:04.516 | 99.99th=[56361] 00:10:04.516 bw ( KiB/s): min=11520, max=16384, per=18.54%, avg=13952.00, stdev=3439.37, samples=2 00:10:04.516 iops : min= 2880, max= 4096, avg=3488.00, stdev=859.84, samples=2 00:10:04.516 lat (usec) : 1000=0.01% 00:10:04.516 lat (msec) : 2=0.03%, 4=0.30%, 10=5.07%, 20=67.58%, 50=25.66% 00:10:04.516 lat (msec) : 100=1.35% 00:10:04.516 cpu : usr=3.99%, sys=6.89%, ctx=459, majf=0, minf=1 00:10:04.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:04.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.516 issued rwts: total=3104,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.516 00:10:04.516 Run status group 0 (all jobs): 00:10:04.516 READ: bw=69.1MiB/s (72.5MB/s), 12.1MiB/s-21.5MiB/s (12.7MB/s-22.5MB/s), io=69.6MiB (73.0MB), run=1002-1007msec 00:10:04.516 WRITE: bw=73.5MiB/s (77.1MB/s), 14.0MiB/s-21.9MiB/s (14.6MB/s-23.0MB/s), io=74.0MiB (77.6MB), run=1002-1007msec 00:10:04.516 00:10:04.516 Disk stats (read/write): 00:10:04.516 nvme0n1: ios=4146/4463, merge=0/0, ticks=24806/26343, in_queue=51149, util=86.87% 00:10:04.516 nvme0n2: ios=4632/4727, merge=0/0, ticks=55150/48366, in_queue=103516, util=87.09% 00:10:04.516 nvme0n3: ios=3624/3858, merge=0/0, ticks=54000/50901, in_queue=104901, util=98.85% 00:10:04.516 nvme0n4: ios=3072/3119, merge=0/0, ticks=48530/48314, in_queue=96844, util=89.67% 00:10:04.516 03:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:04.516 [global] 00:10:04.516 thread=1 00:10:04.516 invalidate=1 00:10:04.516 rw=randwrite 00:10:04.516 time_based=1 00:10:04.516 runtime=1 00:10:04.516 ioengine=libaio 
00:10:04.516 direct=1 00:10:04.516 bs=4096 00:10:04.516 iodepth=128 00:10:04.516 norandommap=0 00:10:04.516 numjobs=1 00:10:04.516 00:10:04.516 verify_dump=1 00:10:04.516 verify_backlog=512 00:10:04.516 verify_state_save=0 00:10:04.516 do_verify=1 00:10:04.516 verify=crc32c-intel 00:10:04.516 [job0] 00:10:04.516 filename=/dev/nvme0n1 00:10:04.516 [job1] 00:10:04.516 filename=/dev/nvme0n2 00:10:04.516 [job2] 00:10:04.516 filename=/dev/nvme0n3 00:10:04.516 [job3] 00:10:04.516 filename=/dev/nvme0n4 00:10:04.516 Could not set queue depth (nvme0n1) 00:10:04.516 Could not set queue depth (nvme0n2) 00:10:04.516 Could not set queue depth (nvme0n3) 00:10:04.516 Could not set queue depth (nvme0n4) 00:10:04.516 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.516 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.516 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.516 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.516 fio-3.35 00:10:04.516 Starting 4 threads 00:10:05.887 00:10:05.887 job0: (groupid=0, jobs=1): err= 0: pid=751855: Thu Jul 25 03:53:20 2024 00:10:05.887 read: IOPS=5097, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:10:05.887 slat (usec): min=2, max=11579, avg=92.65, stdev=576.05 00:10:05.887 clat (usec): min=1117, max=33354, avg=12466.53, stdev=3086.26 00:10:05.887 lat (usec): min=4046, max=33369, avg=12559.18, stdev=3119.27 00:10:05.887 clat percentiles (usec): 00:10:05.888 | 1.00th=[ 6783], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10683], 00:10:05.888 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12125], 00:10:05.888 | 70.00th=[12387], 80.00th=[12911], 90.00th=[16581], 95.00th=[19530], 00:10:05.888 | 99.00th=[25822], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 
00:10:05.888 | 99.99th=[33424] 00:10:05.888 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:05.888 slat (usec): min=3, max=30250, avg=91.58, stdev=673.85 00:10:05.888 clat (usec): min=6732, max=45400, avg=12338.98, stdev=4469.17 00:10:05.888 lat (usec): min=6739, max=45414, avg=12430.56, stdev=4507.89 00:10:05.888 clat percentiles (usec): 00:10:05.888 | 1.00th=[ 7570], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10814], 00:10:05.888 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:10:05.888 | 70.00th=[12125], 80.00th=[12518], 90.00th=[13042], 95.00th=[14746], 00:10:05.888 | 99.00th=[39584], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:10:05.888 | 99.99th=[45351] 00:10:05.888 bw ( KiB/s): min=20480, max=20480, per=30.14%, avg=20480.00, stdev= 0.00, samples=2 00:10:05.888 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:05.888 lat (msec) : 2=0.01%, 10=8.58%, 20=87.77%, 50=3.65% 00:10:05.888 cpu : usr=8.48%, sys=10.38%, ctx=300, majf=0, minf=1 00:10:05.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:05.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.888 issued rwts: total=5113,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.888 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.888 job1: (groupid=0, jobs=1): err= 0: pid=751856: Thu Jul 25 03:53:20 2024 00:10:05.888 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:10:05.888 slat (usec): min=2, max=26989, avg=168.06, stdev=1221.90 00:10:05.888 clat (msec): min=6, max=112, avg=21.03, stdev=18.34 00:10:05.888 lat (msec): min=6, max=112, avg=21.20, stdev=18.49 00:10:05.888 clat percentiles (msec): 00:10:05.888 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:10:05.888 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 17], 
00:10:05.888 | 70.00th=[ 19], 80.00th=[ 22], 90.00th=[ 43], 95.00th=[ 74], 00:10:05.888 | 99.00th=[ 100], 99.50th=[ 100], 99.90th=[ 110], 99.95th=[ 111], 00:10:05.888 | 99.99th=[ 112] 00:10:05.888 write: IOPS=3121, BW=12.2MiB/s (12.8MB/s)(12.4MiB/1013msec); 0 zone resets 00:10:05.888 slat (usec): min=3, max=20251, avg=142.61, stdev=866.55 00:10:05.888 clat (usec): min=6024, max=88889, avg=20076.15, stdev=14550.09 00:10:05.888 lat (usec): min=6029, max=88923, avg=20218.76, stdev=14644.54 00:10:05.888 clat percentiles (usec): 00:10:05.888 | 1.00th=[ 7373], 5.00th=[ 9503], 10.00th=[10945], 20.00th=[12518], 00:10:05.888 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13960], 60.00th=[15401], 00:10:05.888 | 70.00th=[19530], 80.00th=[22676], 90.00th=[41157], 95.00th=[59507], 00:10:05.888 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[87557], 00:10:05.888 | 99.99th=[88605] 00:10:05.888 bw ( KiB/s): min= 6912, max=17664, per=18.09%, avg=12288.00, stdev=7602.81, samples=2 00:10:05.888 iops : min= 1728, max= 4416, avg=3072.00, stdev=1900.70, samples=2 00:10:05.888 lat (msec) : 10=5.34%, 20=70.02%, 50=18.01%, 100=6.54%, 250=0.08% 00:10:05.888 cpu : usr=3.75%, sys=6.72%, ctx=312, majf=0, minf=1 00:10:05.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:05.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.888 issued rwts: total=3072,3162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.888 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.888 job2: (groupid=0, jobs=1): err= 0: pid=751857: Thu Jul 25 03:53:20 2024 00:10:05.888 read: IOPS=3658, BW=14.3MiB/s (15.0MB/s)(14.5MiB/1012msec) 00:10:05.888 slat (usec): min=2, max=11987, avg=122.67, stdev=832.32 00:10:05.888 clat (usec): min=5647, max=38923, avg=16151.21, stdev=5269.30 00:10:05.888 lat (usec): min=5653, max=38931, avg=16273.88, stdev=5322.51 
00:10:05.888 clat percentiles (usec): 00:10:05.888 | 1.00th=[ 8586], 5.00th=[10159], 10.00th=[12125], 20.00th=[12649], 00:10:05.888 | 30.00th=[12911], 40.00th=[13435], 50.00th=[14222], 60.00th=[15401], 00:10:05.888 | 70.00th=[17433], 80.00th=[19006], 90.00th=[22676], 95.00th=[28443], 00:10:05.888 | 99.00th=[33162], 99.50th=[33817], 99.90th=[39060], 99.95th=[39060], 00:10:05.888 | 99.99th=[39060] 00:10:05.888 write: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec); 0 zone resets 00:10:05.888 slat (usec): min=3, max=12609, avg=119.17, stdev=670.73 00:10:05.888 clat (usec): min=1225, max=39666, avg=16752.63, stdev=6849.06 00:10:05.888 lat (usec): min=1234, max=39675, avg=16871.80, stdev=6883.82 00:10:05.888 clat percentiles (usec): 00:10:05.888 | 1.00th=[ 5145], 5.00th=[ 8291], 10.00th=[ 9372], 20.00th=[11207], 00:10:05.888 | 30.00th=[12256], 40.00th=[13960], 50.00th=[14615], 60.00th=[16057], 00:10:05.888 | 70.00th=[20579], 80.00th=[23200], 90.00th=[27657], 95.00th=[29492], 00:10:05.888 | 99.00th=[33162], 99.50th=[34341], 99.90th=[39584], 99.95th=[39584], 00:10:05.888 | 99.99th=[39584] 00:10:05.888 bw ( KiB/s): min=16312, max=16384, per=24.06%, avg=16348.00, stdev=50.91, samples=2 00:10:05.888 iops : min= 4078, max= 4096, avg=4087.00, stdev=12.73, samples=2 00:10:05.888 lat (msec) : 2=0.14%, 4=0.26%, 10=8.80%, 20=66.35%, 50=24.45% 00:10:05.888 cpu : usr=3.76%, sys=5.34%, ctx=444, majf=0, minf=1 00:10:05.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:05.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.888 issued rwts: total=3702,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.888 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.888 job3: (groupid=0, jobs=1): err= 0: pid=751860: Thu Jul 25 03:53:20 2024 00:10:05.888 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:10:05.888 
slat (usec): min=2, max=26244, avg=111.28, stdev=879.65 00:10:05.888 clat (usec): min=4657, max=55798, avg=15029.02, stdev=6145.53 00:10:05.888 lat (usec): min=4682, max=55809, avg=15140.30, stdev=6205.20 00:10:05.888 clat percentiles (usec): 00:10:05.888 | 1.00th=[ 9241], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11469], 00:10:05.888 | 30.00th=[11994], 40.00th=[12911], 50.00th=[13304], 60.00th=[13960], 00:10:05.888 | 70.00th=[15139], 80.00th=[16909], 90.00th=[21103], 95.00th=[26084], 00:10:05.888 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:10:05.888 | 99.99th=[55837] 00:10:05.888 write: IOPS=4785, BW=18.7MiB/s (19.6MB/s)(18.9MiB/1009msec); 0 zone resets 00:10:05.888 slat (usec): min=3, max=12018, avg=85.52, stdev=620.37 00:10:05.888 clat (usec): min=1432, max=25808, avg=12158.20, stdev=2760.32 00:10:05.888 lat (usec): min=1443, max=25826, avg=12243.73, stdev=2810.42 00:10:05.888 clat percentiles (usec): 00:10:05.888 | 1.00th=[ 4948], 5.00th=[ 7373], 10.00th=[ 8094], 20.00th=[ 9896], 00:10:05.888 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12387], 60.00th=[12911], 00:10:05.888 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14615], 95.00th=[16712], 00:10:05.888 | 99.00th=[18220], 99.50th=[19006], 99.90th=[24773], 99.95th=[25822], 00:10:05.888 | 99.99th=[25822] 00:10:05.888 bw ( KiB/s): min=17720, max=19888, per=27.68%, avg=18804.00, stdev=1533.01, samples=2 00:10:05.888 iops : min= 4430, max= 4972, avg=4701.00, stdev=383.25, samples=2 00:10:05.888 lat (msec) : 2=0.11%, 4=0.07%, 10=13.35%, 20=80.41%, 50=6.05% 00:10:05.888 lat (msec) : 100=0.01% 00:10:05.888 cpu : usr=7.44%, sys=10.32%, ctx=342, majf=0, minf=1 00:10:05.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:05.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.888 issued rwts: total=4608,4829,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:05.888 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.888 00:10:05.888 Run status group 0 (all jobs): 00:10:05.888 READ: bw=63.6MiB/s (66.7MB/s), 11.8MiB/s-19.9MiB/s (12.4MB/s-20.9MB/s), io=64.4MiB (67.6MB), run=1003-1013msec 00:10:05.888 WRITE: bw=66.4MiB/s (69.6MB/s), 12.2MiB/s-19.9MiB/s (12.8MB/s-20.9MB/s), io=67.2MiB (70.5MB), run=1003-1013msec 00:10:05.888 00:10:05.888 Disk stats (read/write): 00:10:05.888 nvme0n1: ios=4115/4418, merge=0/0, ticks=24471/21686, in_queue=46157, util=89.78% 00:10:05.888 nvme0n2: ios=2582/2855, merge=0/0, ticks=20952/15222, in_queue=36174, util=97.36% 00:10:05.888 nvme0n3: ios=3129/3584, merge=0/0, ticks=47266/54972, in_queue=102238, util=93.64% 00:10:05.888 nvme0n4: ios=3785/4096, merge=0/0, ticks=45786/41989, in_queue=87775, util=98.21% 00:10:05.888 03:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:05.888 03:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=751996 00:10:05.888 03:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:05.888 03:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:05.888 [global] 00:10:05.888 thread=1 00:10:05.888 invalidate=1 00:10:05.888 rw=read 00:10:05.888 time_based=1 00:10:05.888 runtime=10 00:10:05.888 ioengine=libaio 00:10:05.888 direct=1 00:10:05.888 bs=4096 00:10:05.888 iodepth=1 00:10:05.888 norandommap=1 00:10:05.888 numjobs=1 00:10:05.888 00:10:05.888 [job0] 00:10:05.888 filename=/dev/nvme0n1 00:10:05.888 [job1] 00:10:05.888 filename=/dev/nvme0n2 00:10:05.888 [job2] 00:10:05.888 filename=/dev/nvme0n3 00:10:05.888 [job3] 00:10:05.888 filename=/dev/nvme0n4 00:10:05.888 Could not set queue depth (nvme0n1) 00:10:05.888 Could not set queue depth (nvme0n2) 00:10:05.888 Could not set queue depth (nvme0n3) 00:10:05.888 Could not set queue 
depth (nvme0n4) 00:10:05.888 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.888 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.888 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.889 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.889 fio-3.35 00:10:05.889 Starting 4 threads 00:10:09.163 03:53:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:09.163 03:53:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:09.163 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=8675328, buflen=4096 00:10:09.163 fio: pid=752182, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:09.420 03:53:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:09.420 03:53:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:09.420 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=37539840, buflen=4096 00:10:09.420 fio: pid=752170, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:09.676 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=35774464, buflen=4096 00:10:09.676 fio: pid=752120, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:09.676 03:53:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:10:09.676 03:53:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:09.934 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:09.934 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:09.934 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=7913472, buflen=4096 00:10:09.934 fio: pid=752139, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:09.934 00:10:09.934 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=752120: Thu Jul 25 03:53:25 2024 00:10:09.934 read: IOPS=2531, BW=9.89MiB/s (10.4MB/s)(34.1MiB/3450msec) 00:10:09.934 slat (usec): min=5, max=15849, avg=14.86, stdev=250.74 00:10:09.934 clat (usec): min=247, max=41065, avg=377.32, stdev=868.53 00:10:09.934 lat (usec): min=253, max=41072, avg=391.34, stdev=900.74 00:10:09.934 clat percentiles (usec): 00:10:09.934 | 1.00th=[ 262], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 338], 00:10:09.934 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:10:09.934 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 396], 95.00th=[ 404], 00:10:09.934 | 99.00th=[ 474], 99.50th=[ 502], 99.90th=[ 832], 99.95th=[ 1975], 00:10:09.934 | 99.99th=[41157] 00:10:09.934 bw ( KiB/s): min= 9520, max=11968, per=44.88%, avg=10552.00, stdev=839.69, samples=6 00:10:09.934 iops : min= 2380, max= 2992, avg=2638.00, stdev=209.92, samples=6 00:10:09.934 lat (usec) : 250=0.02%, 500=99.46%, 750=0.39%, 1000=0.03% 00:10:09.934 lat (msec) : 2=0.03%, 50=0.05% 00:10:09.934 cpu : usr=1.86%, sys=4.41%, ctx=8737, majf=0, minf=1 00:10:09.934 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:10:09.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.934 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.934 issued rwts: total=8735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.934 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.934 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=752139: Thu Jul 25 03:53:25 2024 00:10:09.934 read: IOPS=517, BW=2070KiB/s (2119kB/s)(7728KiB/3734msec) 00:10:09.934 slat (usec): min=5, max=26564, avg=47.48, stdev=891.78 00:10:09.934 clat (usec): min=248, max=43028, avg=1871.21, stdev=7705.94 00:10:09.934 lat (usec): min=254, max=60997, avg=1918.70, stdev=7805.82 00:10:09.934 clat percentiles (usec): 00:10:09.934 | 1.00th=[ 277], 5.00th=[ 310], 10.00th=[ 326], 20.00th=[ 338], 00:10:09.934 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:10:09.934 | 70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 408], 95.00th=[ 437], 00:10:09.934 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[43254], 00:10:09.934 | 99.99th=[43254] 00:10:09.934 bw ( KiB/s): min= 96, max= 7811, per=8.47%, avg=1992.43, stdev=3295.73, samples=7 00:10:09.934 iops : min= 24, max= 1952, avg=498.00, stdev=823.71, samples=7 00:10:09.934 lat (usec) : 250=0.05%, 500=95.96%, 750=0.10% 00:10:09.934 lat (msec) : 2=0.10%, 50=3.72% 00:10:09.934 cpu : usr=0.27%, sys=0.70%, ctx=1938, majf=0, minf=1 00:10:09.934 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.934 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.934 issued rwts: total=1933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.934 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.934 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=752170: Thu 
Jul 25 03:53:25 2024 00:10:09.934 read: IOPS=2876, BW=11.2MiB/s (11.8MB/s)(35.8MiB/3186msec) 00:10:09.934 slat (nsec): min=5228, max=59216, avg=10989.61, stdev=5312.01 00:10:09.934 clat (usec): min=255, max=40999, avg=331.16, stdev=1038.42 00:10:09.934 lat (usec): min=260, max=41013, avg=342.15, stdev=1038.57 00:10:09.934 clat percentiles (usec): 00:10:09.934 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285], 00:10:09.934 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 310], 00:10:09.934 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 330], 95.00th=[ 338], 00:10:09.934 | 99.00th=[ 392], 99.50th=[ 437], 99.90th=[ 611], 99.95th=[41157], 00:10:09.934 | 99.99th=[41157] 00:10:09.934 bw ( KiB/s): min=10816, max=13024, per=51.95%, avg=12214.67, stdev=763.76, samples=6 00:10:09.934 iops : min= 2704, max= 3256, avg=3053.67, stdev=190.94, samples=6 00:10:09.934 lat (usec) : 500=99.75%, 750=0.15%, 1000=0.01% 00:10:09.934 lat (msec) : 2=0.01%, 50=0.07% 00:10:09.934 cpu : usr=2.29%, sys=4.96%, ctx=9166, majf=0, minf=1 00:10:09.934 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.934 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.934 issued rwts: total=9166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.934 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.934 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=752182: Thu Jul 25 03:53:25 2024 00:10:09.934 read: IOPS=726, BW=2904KiB/s (2974kB/s)(8472KiB/2917msec) 00:10:09.934 slat (nsec): min=5396, max=62320, avg=11102.79, stdev=5152.74 00:10:09.934 clat (usec): min=272, max=42081, avg=1350.15, stdev=6285.98 00:10:09.934 lat (usec): min=286, max=42111, avg=1361.24, stdev=6287.10 00:10:09.934 clat percentiles (usec): 00:10:09.934 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 322], 
00:10:09.934 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 367], 00:10:09.934 | 70.00th=[ 383], 80.00th=[ 412], 90.00th=[ 437], 95.00th=[ 469], 00:10:09.934 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:09.934 | 99.99th=[42206] 00:10:09.934 bw ( KiB/s): min= 96, max=10456, per=9.52%, avg=2238.40, stdev=4596.31, samples=5 00:10:09.934 iops : min= 24, max= 2614, avg=559.60, stdev=1149.08, samples=5 00:10:09.934 lat (usec) : 500=95.89%, 750=1.65% 00:10:09.934 lat (msec) : 50=2.41% 00:10:09.934 cpu : usr=0.45%, sys=1.30%, ctx=2119, majf=0, minf=1 00:10:09.934 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.934 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.935 issued rwts: total=2119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.935 00:10:09.935 Run status group 0 (all jobs): 00:10:09.935 READ: bw=23.0MiB/s (24.1MB/s), 2070KiB/s-11.2MiB/s (2119kB/s-11.8MB/s), io=85.7MiB (89.9MB), run=2917-3734msec 00:10:09.935 00:10:09.935 Disk stats (read/write): 00:10:09.935 nvme0n1: ios=8749/0, merge=0/0, ticks=3044/0, in_queue=3044, util=95.88% 00:10:09.935 nvme0n2: ios=1929/0, merge=0/0, ticks=3475/0, in_queue=3475, util=94.45% 00:10:09.935 nvme0n3: ios=9163/0, merge=0/0, ticks=2721/0, in_queue=2721, util=96.69% 00:10:09.935 nvme0n4: ios=2085/0, merge=0/0, ticks=2782/0, in_queue=2782, util=96.71% 00:10:10.192 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.192 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:10.449 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 
-- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.449 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:10.706 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.707 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:10.964 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.964 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:11.222 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:11.222 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 751996 00:10:11.222 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:11.222 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:11.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.222 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:11.222 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:11.222 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:11.222 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.222 03:53:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:11.222 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.222 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:11.222 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:11.222 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:11.222 nvmf hotplug test: fio failed as expected 00:10:11.222 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:11.479 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:11.479 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:11.479 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:11.479 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:11.479 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:11.479 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.479 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:11.479 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.479 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:11.479 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.479 03:53:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.479 rmmod nvme_tcp 00:10:11.479 rmmod nvme_fabrics 00:10:11.479 rmmod nvme_keyring 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 750076 ']' 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 750076 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 750076 ']' 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 750076 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 750076 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 750076' 00:10:11.737 killing process with pid 750076 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 750076 00:10:11.737 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 750076 
00:10:11.995 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.995 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:11.995 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:11.995 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:11.995 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:11.995 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.995 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.995 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:13.896 00:10:13.896 real 0m23.419s 00:10:13.896 user 1m21.336s 00:10:13.896 sys 0m7.503s 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.896 ************************************ 00:10:13.896 END TEST nvmf_fio_target 00:10:13.896 ************************************ 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.896 ************************************ 00:10:13.896 START TEST nvmf_bdevio 00:10:13.896 ************************************ 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:13.896 * Looking for test storage... 00:10:13.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.896 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.154 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.154 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.154 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.154 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.154 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.154 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.154 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.154 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.154 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.154 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.154 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.154 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.154 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.154 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:14.155 03:53:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:14.155 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:16.055 03:53:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.055 03:53:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:16.055 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:16.055 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:16.055 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:16.055 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:16.055 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:16.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:10:16.056 00:10:16.056 --- 10.0.0.2 ping statistics --- 00:10:16.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.056 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:10:16.056 00:10:16.056 --- 10.0.0.1 ping statistics --- 00:10:16.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.056 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=754720 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 754720 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 754720 ']' 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.056 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.056 [2024-07-25 03:53:31.323768] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:10:16.056 [2024-07-25 03:53:31.323844] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.314 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.314 [2024-07-25 03:53:31.363925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:16.314 [2024-07-25 03:53:31.392392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.314 [2024-07-25 03:53:31.477444] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.314 [2024-07-25 03:53:31.477495] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.314 [2024-07-25 03:53:31.477508] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.314 [2024-07-25 03:53:31.477520] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.314 [2024-07-25 03:53:31.477529] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:16.314 [2024-07-25 03:53:31.477598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.314 [2024-07-25 03:53:31.477689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:16.314 [2024-07-25 03:53:31.477758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:16.314 [2024-07-25 03:53:31.478103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.314 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.314 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:16.314 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:16.314 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:16.314 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.314 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.314 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:16.314 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.314 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.572 [2024-07-25 03:53:31.615417] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.572 03:53:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.572 Malloc0 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.572 [2024-07-25 03:53:31.666205] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:16.572 { 00:10:16.572 "params": { 00:10:16.572 "name": "Nvme$subsystem", 00:10:16.572 "trtype": "$TEST_TRANSPORT", 00:10:16.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.572 "adrfam": "ipv4", 00:10:16.572 "trsvcid": "$NVMF_PORT", 00:10:16.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.572 "hdgst": ${hdgst:-false}, 00:10:16.572 "ddgst": ${ddgst:-false} 00:10:16.572 }, 00:10:16.572 "method": "bdev_nvme_attach_controller" 00:10:16.572 } 00:10:16.572 EOF 00:10:16.572 )") 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:16.572 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:10:16.573 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:16.573 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:16.573 "params": { 00:10:16.573 "name": "Nvme1", 00:10:16.573 "trtype": "tcp", 00:10:16.573 "traddr": "10.0.0.2", 00:10:16.573 "adrfam": "ipv4", 00:10:16.573 "trsvcid": "4420", 00:10:16.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:16.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:16.573 "hdgst": false, 00:10:16.573 "ddgst": false 00:10:16.573 }, 00:10:16.573 "method": "bdev_nvme_attach_controller" 00:10:16.573 }' 00:10:16.573 [2024-07-25 03:53:31.714404] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:10:16.573 [2024-07-25 03:53:31.714479] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754872 ] 00:10:16.573 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.573 [2024-07-25 03:53:31.745819] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:16.573 [2024-07-25 03:53:31.775001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.573 [2024-07-25 03:53:31.867485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.573 [2024-07-25 03:53:31.867538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.573 [2024-07-25 03:53:31.867541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.831 I/O targets: 00:10:16.831 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:16.831 00:10:16.831 00:10:16.831 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.831 http://cunit.sourceforge.net/ 00:10:16.831 00:10:16.831 00:10:16.831 Suite: bdevio tests on: Nvme1n1 00:10:17.089 Test: blockdev write read block ...passed 00:10:17.089 Test: blockdev write zeroes read block ...passed 00:10:17.089 Test: blockdev write zeroes read no split ...passed 00:10:17.089 Test: blockdev write zeroes read split ...passed 00:10:17.089 Test: blockdev write zeroes read split partial ...passed 00:10:17.089 Test: blockdev reset ...[2024-07-25 03:53:32.249793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:17.089 [2024-07-25 03:53:32.249906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bf940 (9): Bad file descriptor 00:10:17.346 [2024-07-25 03:53:32.398459] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:17.346 passed 00:10:17.346 Test: blockdev write read 8 blocks ...passed 00:10:17.346 Test: blockdev write read size > 128k ...passed 00:10:17.346 Test: blockdev write read invalid size ...passed 00:10:17.346 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.346 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.346 Test: blockdev write read max offset ...passed 00:10:17.346 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.346 Test: blockdev writev readv 8 blocks ...passed 00:10:17.346 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.604 Test: blockdev writev readv block ...passed 00:10:17.604 Test: blockdev writev readv size > 128k ...passed 00:10:17.604 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.604 Test: blockdev comparev and writev ...[2024-07-25 03:53:32.656191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.604 [2024-07-25 03:53:32.656227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:17.604 [2024-07-25 03:53:32.656263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.604 [2024-07-25 03:53:32.656283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:17.604 [2024-07-25 03:53:32.656703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.604 [2024-07-25 03:53:32.656728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:17.604 [2024-07-25 03:53:32.656750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.604 [2024-07-25 03:53:32.656766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:17.604 [2024-07-25 03:53:32.657144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.604 [2024-07-25 03:53:32.657168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:17.604 [2024-07-25 03:53:32.657190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.604 [2024-07-25 03:53:32.657206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:17.604 [2024-07-25 03:53:32.657624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.604 [2024-07-25 03:53:32.657648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:17.604 [2024-07-25 03:53:32.657669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.604 [2024-07-25 03:53:32.657684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:17.604 passed 00:10:17.604 Test: blockdev nvme passthru rw ...passed 00:10:17.604 Test: blockdev nvme passthru vendor specific ...[2024-07-25 03:53:32.740565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.604 [2024-07-25 03:53:32.740592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:17.604 [2024-07-25 03:53:32.740784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.604 [2024-07-25 03:53:32.740805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:17.604 [2024-07-25 03:53:32.740998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.604 [2024-07-25 03:53:32.741019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:17.604 [2024-07-25 03:53:32.741203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.604 [2024-07-25 03:53:32.741224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:17.604 passed 00:10:17.604 Test: blockdev nvme admin passthru ...passed 00:10:17.604 Test: blockdev copy ...passed 00:10:17.604 00:10:17.604 Run Summary: Type Total Ran Passed Failed Inactive 00:10:17.604 suites 1 1 n/a 0 0 00:10:17.604 tests 23 23 23 0 0 00:10:17.604 asserts 152 152 152 0 n/a 00:10:17.604 00:10:17.604 Elapsed time = 1.428 seconds 00:10:17.862 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.862 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.862 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.862 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.862 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 
00:10:17.862 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:17.862 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:17.862 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:17.862 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:17.862 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:17.862 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:17.862 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:17.862 rmmod nvme_tcp 00:10:17.862 rmmod nvme_fabrics 00:10:17.862 rmmod nvme_keyring 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 754720 ']' 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 754720 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 754720 ']' 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 754720 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 754720 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # 
process_name=reactor_3 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 754720' 00:10:17.862 killing process with pid 754720 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 754720 00:10:17.862 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 754720 00:10:18.121 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:18.121 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:18.121 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:18.121 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:18.121 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:18.121 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.121 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.121 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:20.649 00:10:20.649 real 0m6.246s 00:10:20.649 user 0m10.361s 00:10:20.649 sys 0m2.080s 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.649 ************************************ 00:10:20.649 END TEST nvmf_bdevio 00:10:20.649 
************************************ 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:20.649 00:10:20.649 real 3m50.653s 00:10:20.649 user 10m1.307s 00:10:20.649 sys 1m6.960s 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:20.649 ************************************ 00:10:20.649 END TEST nvmf_target_core 00:10:20.649 ************************************ 00:10:20.649 03:53:35 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:20.649 03:53:35 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:20.649 03:53:35 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.649 03:53:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:20.649 ************************************ 00:10:20.649 START TEST nvmf_target_extra 00:10:20.649 ************************************ 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:20.649 * Looking for test storage... 
00:10:20.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.649 03:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.650 03:53:35 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:20.650 ************************************ 00:10:20.650 START TEST nvmf_example 00:10:20.650 ************************************ 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:20.650 * Looking for test storage... 
00:10:20.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:20.650 03:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:20.650 03:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:20.650 03:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.551 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.551 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:22.551 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:22.552 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:22.552 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:22.552 03:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:22.552 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:22.552 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:22.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:10:22.552 00:10:22.552 --- 10.0.0.2 ping statistics --- 00:10:22.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.552 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:22.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:10:22.552 00:10:22.552 --- 10.0.0.1 ping statistics --- 00:10:22.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.552 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:10:22.552 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=756989 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 756989 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 756989 ']' 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.553 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.553 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:22.811 03:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:22.811 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.003 Initializing NVMe Controllers 00:10:35.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:35.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:35.003 Initialization complete. Launching workers. 00:10:35.003 ======================================================== 00:10:35.003 Latency(us) 00:10:35.003 Device Information : IOPS MiB/s Average min max 00:10:35.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14921.46 58.29 4288.82 907.91 15418.62 00:10:35.003 ======================================================== 00:10:35.003 Total : 14921.46 58.29 4288.82 907.91 15418.62 00:10:35.003 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:35.003 rmmod nvme_tcp 00:10:35.003 rmmod nvme_fabrics 00:10:35.003 rmmod nvme_keyring 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@124 -- # set -e 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 756989 ']' 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 756989 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 756989 ']' 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 756989 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 756989 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 756989' 00:10:35.003 killing process with pid 756989 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 756989 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 756989 00:10:35.003 nvmf threads initialize successfully 00:10:35.003 bdev subsystem init successfully 00:10:35.003 created a nvmf target service 00:10:35.003 create targets's poll groups done 00:10:35.003 all subsystems of target started 00:10:35.003 nvmf target is running 00:10:35.003 all subsystems of target stopped 00:10:35.003 destroy targets's poll groups done 00:10:35.003 destroyed the nvmf target service 
00:10:35.003 bdev subsystem finish successfully 00:10:35.003 nvmf threads destroy successfully 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.003 03:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.573 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:35.573 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:35.573 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:35.573 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.573 00:10:35.573 real 0m15.089s 00:10:35.573 user 0m42.327s 00:10:35.573 sys 0m3.152s 00:10:35.573 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.573 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.573 ************************************ 00:10:35.573 END TEST nvmf_example 00:10:35.573 ************************************ 00:10:35.573 03:53:50 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:35.573 03:53:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:35.573 03:53:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.573 03:53:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:35.573 ************************************ 00:10:35.573 START TEST nvmf_filesystem 00:10:35.573 ************************************ 00:10:35.573 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:35.573 * Looking for test storage... 00:10:35.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.573 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:35.573 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:35.573 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # 
CONFIG_OCF=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # 
CONFIG_COVERAGE=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:35.574 03:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:35.574 03:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # 
ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:35.574 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:35.575 #define SPDK_CONFIG_H 00:10:35.575 #define SPDK_CONFIG_APPS 1 00:10:35.575 #define SPDK_CONFIG_ARCH native 00:10:35.575 #undef SPDK_CONFIG_ASAN 00:10:35.575 #undef SPDK_CONFIG_AVAHI 00:10:35.575 #undef SPDK_CONFIG_CET 00:10:35.575 #define SPDK_CONFIG_COVERAGE 1 00:10:35.575 #define SPDK_CONFIG_CROSS_PREFIX 00:10:35.575 #undef SPDK_CONFIG_CRYPTO 00:10:35.575 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:35.575 #undef SPDK_CONFIG_CUSTOMOCF 00:10:35.575 #undef SPDK_CONFIG_DAOS 00:10:35.575 #define SPDK_CONFIG_DAOS_DIR 00:10:35.575 #define SPDK_CONFIG_DEBUG 1 00:10:35.575 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:35.575 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:35.575 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:35.575 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:35.575 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:35.575 #undef SPDK_CONFIG_DPDK_UADK 00:10:35.575 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:35.575 #define 
SPDK_CONFIG_EXAMPLES 1 00:10:35.575 #undef SPDK_CONFIG_FC 00:10:35.575 #define SPDK_CONFIG_FC_PATH 00:10:35.575 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:35.575 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:35.575 #undef SPDK_CONFIG_FUSE 00:10:35.575 #undef SPDK_CONFIG_FUZZER 00:10:35.575 #define SPDK_CONFIG_FUZZER_LIB 00:10:35.575 #undef SPDK_CONFIG_GOLANG 00:10:35.575 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:35.575 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:35.575 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:35.575 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:35.575 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:35.575 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:35.575 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:35.575 #define SPDK_CONFIG_IDXD 1 00:10:35.575 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:35.575 #undef SPDK_CONFIG_IPSEC_MB 00:10:35.575 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:35.575 #define SPDK_CONFIG_ISAL 1 00:10:35.575 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:35.575 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:35.575 #define SPDK_CONFIG_LIBDIR 00:10:35.575 #undef SPDK_CONFIG_LTO 00:10:35.575 #define SPDK_CONFIG_MAX_LCORES 128 00:10:35.575 #define SPDK_CONFIG_NVME_CUSE 1 00:10:35.575 #undef SPDK_CONFIG_OCF 00:10:35.575 #define SPDK_CONFIG_OCF_PATH 00:10:35.575 #define SPDK_CONFIG_OPENSSL_PATH 00:10:35.575 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:35.575 #define SPDK_CONFIG_PGO_DIR 00:10:35.575 #undef SPDK_CONFIG_PGO_USE 00:10:35.575 #define SPDK_CONFIG_PREFIX /usr/local 00:10:35.575 #undef SPDK_CONFIG_RAID5F 00:10:35.575 #undef SPDK_CONFIG_RBD 00:10:35.575 #define SPDK_CONFIG_RDMA 1 00:10:35.575 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:35.575 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:35.575 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:35.575 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:35.575 #define SPDK_CONFIG_SHARED 1 00:10:35.575 #undef SPDK_CONFIG_SMA 00:10:35.575 #define SPDK_CONFIG_TESTS 1 00:10:35.575 #undef SPDK_CONFIG_TSAN 00:10:35.575 #define 
SPDK_CONFIG_UBLK 1 00:10:35.575 #define SPDK_CONFIG_UBSAN 1 00:10:35.575 #undef SPDK_CONFIG_UNIT_TESTS 00:10:35.575 #undef SPDK_CONFIG_URING 00:10:35.575 #define SPDK_CONFIG_URING_PATH 00:10:35.575 #undef SPDK_CONFIG_URING_ZNS 00:10:35.575 #undef SPDK_CONFIG_USDT 00:10:35.575 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:35.575 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:35.575 #define SPDK_CONFIG_VFIO_USER 1 00:10:35.575 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:35.575 #define SPDK_CONFIG_VHOST 1 00:10:35.575 #define SPDK_CONFIG_VIRTIO 1 00:10:35.575 #undef SPDK_CONFIG_VTUNE 00:10:35.575 #define SPDK_CONFIG_VTUNE_DIR 00:10:35.575 #define SPDK_CONFIG_WERROR 1 00:10:35.575 #define SPDK_CONFIG_WPDK_DIR 00:10:35.575 #undef SPDK_CONFIG_XNVME 00:10:35.575 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.575 03:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:35.575 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:35.576 03:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:35.576 
03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:35.576 03:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:35.576 
03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : true 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:35.576 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:35.577 03:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 758676 ]] 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 758676 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.GVgoUf 00:10:35.577 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.GVgoUf/tests/target /tmp/spdk.GVgoUf 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953643008 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:10:35.578 03:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330786816 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=53885181952 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994729472 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=8109547520 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30935183360 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997364736 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:35.578 03:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376539136 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398948352 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22409216 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30996439040 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997364736 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=925696 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199468032 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199472128 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:10:35.578 * Looking for test storage... 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=53885181952 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # 
new_size=10324140032 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:35.578 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.579 03:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:35.579 03:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:35.579 03:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:38.110 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:38.111 03:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:38.111 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:38.111 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:38.111 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:38.111 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:38.111 03:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.111 03:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.111 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:38.111 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.111 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.111 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.111 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:38.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:38.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:10:38.111 00:10:38.111 --- 10.0.0.2 ping statistics --- 00:10:38.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.111 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:10:38.111 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:38.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:10:38.111 00:10:38.111 --- 10.0.0.1 ping statistics --- 00:10:38.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.111 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:10:38.111 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.111 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:38.111 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:38.111 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.111 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:38.111 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:38.112 03:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:38.112 ************************************ 00:10:38.112 START TEST nvmf_filesystem_no_in_capsule 00:10:38.112 ************************************ 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=760301 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 760301 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 760301 ']' 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:38.112 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.112 [2024-07-25 03:53:53.167723] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:10:38.112 [2024-07-25 03:53:53.167800] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.112 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.112 [2024-07-25 03:53:53.205041] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:38.112 [2024-07-25 03:53:53.237944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.112 [2024-07-25 03:53:53.337589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:38.112 [2024-07-25 03:53:53.337656] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.112 [2024-07-25 03:53:53.337672] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.112 [2024-07-25 03:53:53.337686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.112 [2024-07-25 03:53:53.337706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.112 [2024-07-25 03:53:53.337773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.112 [2024-07-25 03:53:53.337826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.112 [2024-07-25 03:53:53.337879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.112 [2024-07-25 03:53:53.337882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.370 [2024-07-25 03:53:53.490776] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.370 Malloc1 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.370 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.627 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.627 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.627 [2024-07-25 03:53:53.673378] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.627 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.627 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:38.628 03:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:38.628 { 00:10:38.628 "name": "Malloc1", 00:10:38.628 "aliases": [ 00:10:38.628 "6ae7c640-64f9-4687-9aa1-a0f5e18eaa5a" 00:10:38.628 ], 00:10:38.628 "product_name": "Malloc disk", 00:10:38.628 "block_size": 512, 00:10:38.628 "num_blocks": 1048576, 00:10:38.628 "uuid": "6ae7c640-64f9-4687-9aa1-a0f5e18eaa5a", 00:10:38.628 "assigned_rate_limits": { 00:10:38.628 "rw_ios_per_sec": 0, 00:10:38.628 "rw_mbytes_per_sec": 0, 00:10:38.628 "r_mbytes_per_sec": 0, 00:10:38.628 "w_mbytes_per_sec": 0 00:10:38.628 }, 00:10:38.628 "claimed": true, 00:10:38.628 "claim_type": "exclusive_write", 00:10:38.628 "zoned": false, 00:10:38.628 "supported_io_types": { 00:10:38.628 "read": true, 00:10:38.628 "write": true, 00:10:38.628 "unmap": true, 00:10:38.628 "flush": true, 00:10:38.628 "reset": true, 00:10:38.628 "nvme_admin": false, 00:10:38.628 "nvme_io": false, 00:10:38.628 "nvme_io_md": false, 00:10:38.628 "write_zeroes": true, 00:10:38.628 "zcopy": true, 00:10:38.628 "get_zone_info": 
false, 00:10:38.628 "zone_management": false, 00:10:38.628 "zone_append": false, 00:10:38.628 "compare": false, 00:10:38.628 "compare_and_write": false, 00:10:38.628 "abort": true, 00:10:38.628 "seek_hole": false, 00:10:38.628 "seek_data": false, 00:10:38.628 "copy": true, 00:10:38.628 "nvme_iov_md": false 00:10:38.628 }, 00:10:38.628 "memory_domains": [ 00:10:38.628 { 00:10:38.628 "dma_device_id": "system", 00:10:38.628 "dma_device_type": 1 00:10:38.628 }, 00:10:38.628 { 00:10:38.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.628 "dma_device_type": 2 00:10:38.628 } 00:10:38.628 ], 00:10:38.628 "driver_specific": {} 00:10:38.628 } 00:10:38.628 ]' 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:38.628 03:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:39.192 03:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:39.192 03:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:39.192 03:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.192 03:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:39.192 03:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:41.121 03:53:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:41.121 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:41.686 03:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:42.251 03:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:43.622 03:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:43.622 03:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:43.622 03:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:43.622 03:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.622 03:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.623 ************************************ 00:10:43.623 START TEST filesystem_ext4 00:10:43.623 ************************************ 00:10:43.623 03:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:43.623 03:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:43.623 03:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.623 03:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:43.623 03:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:43.623 03:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:43.623 03:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:43.623 03:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:43.623 03:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:43.623 03:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:43.623 03:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:43.623 mke2fs 1.46.5 (30-Dec-2021) 00:10:43.623 Discarding device blocks: 0/522240 done 00:10:43.623 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:43.623 Filesystem UUID: a687a26d-9613-489d-88c2-ca950eb7a6bc 00:10:43.623 Superblock backups stored on blocks: 00:10:43.623 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:43.623 00:10:43.623 Allocating group tables: 0/64 done 00:10:43.623 Writing inode tables: 0/64 done 00:10:46.147 Creating journal (8192 blocks): done 00:10:46.969 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:10:46.969 00:10:46.969 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:46.969 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:47.226 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:47.484 03:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 760301 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:47.484 00:10:47.484 real 0m4.110s 00:10:47.484 user 0m0.018s 00:10:47.484 sys 0m0.060s 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:47.484 ************************************ 00:10:47.484 END TEST filesystem_ext4 00:10:47.484 ************************************ 00:10:47.484 03:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.484 ************************************ 00:10:47.484 START TEST filesystem_btrfs 00:10:47.484 ************************************ 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:47.484 03:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:47.484 03:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:48.048 btrfs-progs v6.6.2 00:10:48.048 See https://btrfs.readthedocs.io for more information. 00:10:48.048 00:10:48.048 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:48.048 NOTE: several default settings have changed in version 5.15, please make sure 00:10:48.048 this does not affect your deployments: 00:10:48.049 - DUP for metadata (-m dup) 00:10:48.049 - enabled no-holes (-O no-holes) 00:10:48.049 - enabled free-space-tree (-R free-space-tree) 00:10:48.049 00:10:48.049 Label: (null) 00:10:48.049 UUID: 6260a7b7-03c5-43f8-9bd3-c5a55a5d3df2 00:10:48.049 Node size: 16384 00:10:48.049 Sector size: 4096 00:10:48.049 Filesystem size: 510.00MiB 00:10:48.049 Block group profiles: 00:10:48.049 Data: single 8.00MiB 00:10:48.049 Metadata: DUP 32.00MiB 00:10:48.049 System: DUP 8.00MiB 00:10:48.049 SSD detected: yes 00:10:48.049 Zoned device: no 00:10:48.049 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:48.049 Runtime features: free-space-tree 00:10:48.049 Checksum: crc32c 00:10:48.049 Number of devices: 1 00:10:48.049 Devices: 00:10:48.049 ID SIZE PATH 00:10:48.049 1 510.00MiB /dev/nvme0n1p1 00:10:48.049 00:10:48.049 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:48.049 03:54:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:48.049 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:48.049 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:48.049 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:48.049 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:48.049 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:48.049 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 760301 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:48.306 00:10:48.306 real 0m0.692s 00:10:48.306 user 0m0.021s 00:10:48.306 sys 0m0.119s 
00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:48.306 ************************************ 00:10:48.306 END TEST filesystem_btrfs 00:10:48.306 ************************************ 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.306 ************************************ 00:10:48.306 START TEST filesystem_xfs 00:10:48.306 ************************************ 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:48.306 03:54:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:48.306 03:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:48.306 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:48.306 = sectsz=512 attr=2, projid32bit=1 00:10:48.306 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:48.306 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:48.306 data = bsize=4096 blocks=130560, imaxpct=25 00:10:48.306 = sunit=0 swidth=0 blks 00:10:48.306 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:48.306 log =internal log bsize=4096 blocks=16384, version=2 00:10:48.306 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:48.306 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:49.236 Discarding blocks...Done. 
00:10:49.236 03:54:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:49.236 03:54:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 760301 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:51.759 03:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:51.759 00:10:51.759 real 0m3.071s 00:10:51.759 user 0m0.018s 00:10:51.759 sys 0m0.061s 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:51.759 ************************************ 00:10:51.759 END TEST filesystem_xfs 00:10:51.759 ************************************ 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:51.759 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 760301 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 760301 ']' 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 760301 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 760301 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 760301' 00:10:51.760 killing process with pid 760301 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 760301 00:10:51.760 03:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 760301 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:52.017 00:10:52.017 real 0m14.023s 00:10:52.017 user 0m53.955s 00:10:52.017 sys 0m1.956s 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.017 ************************************ 00:10:52.017 END TEST nvmf_filesystem_no_in_capsule 00:10:52.017 ************************************ 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.017 03:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.017 ************************************ 00:10:52.017 START TEST nvmf_filesystem_in_capsule 00:10:52.017 ************************************ 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=762417 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 762417 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 762417 ']' 00:10:52.017 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.017 03:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:52.018 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.018 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:52.018 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.018 [2024-07-25 03:54:07.235603] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:10:52.018 [2024-07-25 03:54:07.235709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.018 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.018 [2024-07-25 03:54:07.275825] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:52.018 [2024-07-25 03:54:07.303872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.275 [2024-07-25 03:54:07.400085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.275 [2024-07-25 03:54:07.400155] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:52.275 [2024-07-25 03:54:07.400171] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.275 [2024-07-25 03:54:07.400185] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.275 [2024-07-25 03:54:07.400197] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.275 [2024-07-25 03:54:07.400285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.275 [2024-07-25 03:54:07.400340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.275 [2024-07-25 03:54:07.400405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.275 [2024-07-25 03:54:07.400408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.275 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:52.275 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:52.275 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:52.275 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:52.275 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.275 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.275 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:52.275 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- 
# rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:52.275 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.275 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.275 [2024-07-25 03:54:07.560582] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.275 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.275 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:52.275 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.275 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.533 Malloc1 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.533 [2024-07-25 03:54:07.737549] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:52.533 03:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:52.533 { 00:10:52.533 "name": "Malloc1", 00:10:52.533 "aliases": [ 00:10:52.533 "f342afd6-c6f3-4d59-bb30-afb14a437599" 00:10:52.533 ], 00:10:52.533 "product_name": "Malloc disk", 00:10:52.533 "block_size": 512, 00:10:52.533 "num_blocks": 1048576, 00:10:52.533 "uuid": "f342afd6-c6f3-4d59-bb30-afb14a437599", 00:10:52.533 "assigned_rate_limits": { 00:10:52.533 "rw_ios_per_sec": 0, 00:10:52.533 "rw_mbytes_per_sec": 0, 00:10:52.533 "r_mbytes_per_sec": 0, 00:10:52.533 "w_mbytes_per_sec": 0 00:10:52.533 }, 00:10:52.533 "claimed": true, 00:10:52.533 "claim_type": "exclusive_write", 00:10:52.533 "zoned": false, 00:10:52.533 "supported_io_types": { 00:10:52.533 "read": true, 00:10:52.533 "write": true, 00:10:52.533 "unmap": true, 00:10:52.533 "flush": true, 00:10:52.533 "reset": true, 00:10:52.533 "nvme_admin": false, 00:10:52.533 "nvme_io": false, 00:10:52.533 "nvme_io_md": false, 00:10:52.533 "write_zeroes": true, 00:10:52.533 "zcopy": true, 00:10:52.533 "get_zone_info": false, 00:10:52.533 "zone_management": false, 00:10:52.533 "zone_append": false, 00:10:52.533 "compare": false, 00:10:52.533 "compare_and_write": false, 00:10:52.533 "abort": true, 00:10:52.533 "seek_hole": false, 00:10:52.533 "seek_data": false, 00:10:52.533 "copy": true, 00:10:52.533 "nvme_iov_md": 
false 00:10:52.533 }, 00:10:52.533 "memory_domains": [ 00:10:52.533 { 00:10:52.533 "dma_device_id": "system", 00:10:52.533 "dma_device_type": 1 00:10:52.533 }, 00:10:52.533 { 00:10:52.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.533 "dma_device_type": 2 00:10:52.533 } 00:10:52.533 ], 00:10:52.533 "driver_specific": {} 00:10:52.533 } 00:10:52.533 ]' 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:52.533 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:52.791 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:52.791 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:52.791 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:52.791 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:52.791 03:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.354 03:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:53.354 03:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 
00:10:53.354 03:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.354 03:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:53.354 03:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:55.249 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:55.249 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:55.249 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:55.249 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:55.249 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.249 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:55.249 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:55.249 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:55.249 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:55.507 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:55.507 
03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:55.507 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:55.507 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:55.507 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:55.507 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:55.507 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:55.507 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:55.507 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:55.764 03:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:56.696 03:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:56.696 03:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:56.696 03:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:56.696 03:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.696 03:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.954 ************************************ 00:10:56.954 START TEST filesystem_in_capsule_ext4 00:10:56.954 ************************************ 00:10:56.955 03:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:56.955 03:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:56.955 03:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:56.955 03:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:56.955 03:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:56.955 03:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:56.955 03:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:56.955 03:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:56.955 03:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:56.955 03:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@932 -- # force=-F 00:10:56.955 03:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:56.955 mke2fs 1.46.5 (30-Dec-2021) 00:10:56.955 Discarding device blocks: 0/522240 done 00:10:56.955 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:56.955 Filesystem UUID: d598302f-b980-4fcc-bef8-2b57d92e3f48 00:10:56.955 Superblock backups stored on blocks: 00:10:56.955 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:56.955 00:10:56.955 Allocating group tables: 0/64 done 00:10:56.955 Writing inode tables: 0/64 done 00:11:00.267 Creating journal (8192 blocks): done 00:11:00.267 Writing superblocks and filesystem accounting information: 0/64 done 00:11:00.267 00:11:00.267 03:54:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:00.267 03:54:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:00.525 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:00.782 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:00.782 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 
00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 762417 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:00.783 00:11:00.783 real 0m3.924s 00:11:00.783 user 0m0.019s 00:11:00.783 sys 0m0.057s 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:00.783 ************************************ 00:11:00.783 END TEST filesystem_in_capsule_ext4 00:11:00.783 ************************************ 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:00.783 03:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.783 ************************************ 00:11:00.783 START TEST filesystem_in_capsule_btrfs 00:11:00.783 ************************************ 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' 
btrfs = ext4 ']' 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:00.783 03:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:01.041 btrfs-progs v6.6.2 00:11:01.041 See https://btrfs.readthedocs.io for more information. 00:11:01.041 00:11:01.041 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:01.041 NOTE: several default settings have changed in version 5.15, please make sure 00:11:01.041 this does not affect your deployments: 00:11:01.041 - DUP for metadata (-m dup) 00:11:01.041 - enabled no-holes (-O no-holes) 00:11:01.041 - enabled free-space-tree (-R free-space-tree) 00:11:01.041 00:11:01.041 Label: (null) 00:11:01.041 UUID: 11c5390a-019a-4039-8863-dae7fc2540cf 00:11:01.041 Node size: 16384 00:11:01.041 Sector size: 4096 00:11:01.041 Filesystem size: 510.00MiB 00:11:01.041 Block group profiles: 00:11:01.041 Data: single 8.00MiB 00:11:01.041 Metadata: DUP 32.00MiB 00:11:01.041 System: DUP 8.00MiB 00:11:01.041 SSD detected: yes 00:11:01.041 Zoned device: no 00:11:01.041 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:01.041 Runtime features: free-space-tree 00:11:01.041 Checksum: crc32c 00:11:01.041 Number of devices: 1 00:11:01.041 Devices: 00:11:01.041 ID SIZE PATH 00:11:01.041 1 510.00MiB /dev/nvme0n1p1 00:11:01.041 00:11:01.041 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:01.041 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:01.607 03:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 762417 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:01.607 00:11:01.607 real 0m0.694s 00:11:01.607 user 0m0.013s 00:11:01.607 sys 0m0.118s 00:11:01.607 03:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:01.607 ************************************ 00:11:01.607 END TEST filesystem_in_capsule_btrfs 00:11:01.607 ************************************ 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.607 ************************************ 00:11:01.607 START TEST filesystem_in_capsule_xfs 00:11:01.607 ************************************ 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:01.607 03:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:01.607 03:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:01.607 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:01.607 = sectsz=512 attr=2, projid32bit=1 00:11:01.607 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:01.607 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:01.607 data = bsize=4096 blocks=130560, imaxpct=25 00:11:01.607 = sunit=0 swidth=0 blks 00:11:01.607 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:01.607 log =internal log bsize=4096 blocks=16384, version=2 00:11:01.607 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:01.607 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:02.540 Discarding blocks...Done. 
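The mkfs step traced above is driven by a `make_filesystem` helper in common/autotest_common.sh. A minimal sketch of the logic visible in the xtrace — the ext4-vs-other force-flag choice (`-F` vs `-f`, autotest_common.sh@931-934) and the final `mkfs.$fstype` call are from the trace; everything else (the pure `mkfs_force_flag` helper, error handling) is an assumption for illustration:

```shell
# Hedged sketch of make_filesystem as inferred from the xtrace above.
# mkfs_force_flag is a hypothetical pure helper (not in the trace) so the
# flag selection can be exercised without touching a real block device.
mkfs_force_flag() {
    # ext4's mkfs wants -F to force; xfs and btrfs use -f
    if [ "$1" = ext4 ]; then
        printf '%s\n' -F
    else
        printf '%s\n' -f
    fi
}

make_filesystem() {
    local fstype=$1 dev_name=$2
    local force
    force=$(mkfs_force_flag "$fstype")
    # e.g. mkfs.xfs -f /dev/nvme0n1p1, as seen in the trace output above
    mkfs."$fstype" "$force" "$dev_name" && return 0
}
```

The test then mounts the device, touches and removes a file with `sync` in between, and unmounts — the sequence visible at target/filesystem.sh@23-30.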
00:11:02.540 03:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:02.540 03:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:04.435 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:04.435 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:04.435 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:04.435 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:04.435 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:04.435 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:04.435 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 762417 00:11:04.435 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:04.435 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:04.435 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:04.435 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:04.435 00:11:04.435 real 0m2.928s 00:11:04.435 user 0m0.014s 00:11:04.435 sys 0m0.061s 00:11:04.435 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.435 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:04.435 ************************************ 00:11:04.435 END TEST filesystem_in_capsule_xfs 00:11:04.435 ************************************ 00:11:04.435 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:04.692 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:04.692 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:04.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.693 03:54:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 762417 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 762417 ']' 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 762417 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:04.693 03:54:19 
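The `waitforserial_disconnect` call traced above (autotest_common.sh@1219-1231) polls `lsblk -o NAME,SERIAL` until the controller's serial string disappears after `nvme disconnect`. A minimal sketch under stated assumptions — only the `lsblk | grep -q -w <serial>` probe and the `local i=0` counter are from the trace; the poll interval and iteration cap are guesses:

```shell
# Hedged sketch of waitforserial_disconnect as inferred from the xtrace.
# The 15-iteration cap and 1s sleep are assumptions, not read from the log.
waitforserial_disconnect() {
    local serial=$1 i=0
    # loop while any block device still reports this serial
    while lsblk -o NAME,SERIAL 2>/dev/null | grep -q -w "$serial"; do
        if (( ++i > 15 )); then
            return 1   # device never went away
        fi
        sleep 1
    done
    return 0
}
```

In the run above it is invoked as `waitforserial_disconnect SPDKISFASTANDAWESOME`, matching the serial set by NVMF_SERIAL.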
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 762417 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 762417' 00:11:04.693 killing process with pid 762417 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 762417 00:11:04.693 03:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 762417 00:11:05.258 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:05.258 00:11:05.258 real 0m13.133s 00:11:05.258 user 0m50.399s 00:11:05.258 sys 0m1.929s 00:11:05.258 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.258 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.258 ************************************ 00:11:05.258 END TEST nvmf_filesystem_in_capsule 00:11:05.258 ************************************ 00:11:05.258 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:05.258 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:05.258 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:05.258 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:05.258 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:05.258 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:05.258 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:05.258 rmmod nvme_tcp 00:11:05.258 rmmod nvme_fabrics 00:11:05.258 rmmod nvme_keyring 00:11:05.258 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:05.258 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:05.258 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:05.258 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:05.259 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:05.259 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:05.259 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:05.259 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:05.259 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:05.259 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.259 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.259 03:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.159 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:07.159 00:11:07.159 real 
0m31.776s 00:11:07.159 user 1m45.266s 00:11:07.159 sys 0m5.598s 00:11:07.159 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.159 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:07.159 ************************************ 00:11:07.159 END TEST nvmf_filesystem 00:11:07.159 ************************************ 00:11:07.418 03:54:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:07.418 03:54:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:07.418 03:54:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.418 03:54:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:07.418 ************************************ 00:11:07.418 START TEST nvmf_target_discovery 00:11:07.418 ************************************ 00:11:07.418 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:07.418 * Looking for test storage... 
00:11:07.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:07.419 03:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:09.316 
03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.316 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:11:09.317 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:09.317 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:09.317 03:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:09.317 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.317 03:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:09.317 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:09.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:09.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:11:09.317 00:11:09.317 --- 10.0.0.2 ping statistics --- 00:11:09.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.317 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:09.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:11:09.317 00:11:09.317 --- 10.0.0.1 ping statistics --- 00:11:09.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.317 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:09.317 03:54:24 
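The network setup traced above (nvmf/common.sh `nvmf_tcp_init`) moves one port of the NIC into a private network namespace so the target and initiator can talk over real hardware on one host. Below is a dry-run sketch of those steps; the device names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addresses are taken from the log, while `RUN=echo` is an illustrative guard that prints each privileged command instead of executing it (clear it to run for real, which requires root):

```shell
# Dry-run sketch of the harness's nvmf_tcp_init. RUN=echo prints the
# privileged commands rather than executing them (assumption for safety;
# set RUN= and run as root to apply them).
RUN=echo
TARGET_IF=cvl_0_0   # target-side port, moved into the namespace
INIT_IF=cvl_0_1     # initiator-side port, stays in the root namespace
NS=cvl_0_0_ns_spdk

nvmf_tcp_init_sketch() {
    $RUN ip -4 addr flush "$TARGET_IF"
    $RUN ip -4 addr flush "$INIT_IF"
    $RUN ip netns add "$NS"
    $RUN ip link set "$TARGET_IF" netns "$NS"
    # Initiator gets 10.0.0.1, target gets 10.0.0.2 inside the namespace.
    $RUN ip addr add 10.0.0.1/24 dev "$INIT_IF"
    $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    $RUN ip link set "$INIT_IF" up
    $RUN ip netns exec "$NS" ip link set "$TARGET_IF" up
    $RUN ip netns exec "$NS" ip link set lo up
    # Admit NVMe/TCP traffic to port 4420 arriving on the initiator port.
    $RUN iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions, as the log does with ping -c 1.
    $RUN ping -c 1 10.0.0.2
    $RUN ip netns exec "$NS" ping -c 1 10.0.0.1
}

nvmf_tcp_init_sketch
```

The two pings at the end correspond to the `0% packet loss` checks in the log; only after both succeed does the harness `return 0` from setup.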
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=766363 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 766363 00:11:09.317 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 766363 ']' 00:11:09.318 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:09.318 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.318 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:09.318 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.318 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:09.318 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.575 [2024-07-25 03:54:24.647773] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:11:09.575 [2024-07-25 03:54:24.647858] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.575 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.575 [2024-07-25 03:54:24.684852] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:09.575 [2024-07-25 03:54:24.717099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.575 [2024-07-25 03:54:24.812196] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.575 [2024-07-25 03:54:24.812281] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.575 [2024-07-25 03:54:24.812299] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.575 [2024-07-25 03:54:24.812313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.575 [2024-07-25 03:54:24.812324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:09.575 [2024-07-25 03:54:24.812382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.575 [2024-07-25 03:54:24.812439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.575 [2024-07-25 03:54:24.812490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.575 [2024-07-25 03:54:24.812493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 [2024-07-25 03:54:24.973912] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:09.833 03:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 Null1 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 [2024-07-25 03:54:25.014276] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 Null2 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 
03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 Null3 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 Null4 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.833 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.834 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.834 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:09.834 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.834 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.834 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.834 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:09.834 03:54:25 
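The provisioning loop traced above (target/discovery.sh lines 26-35) creates four null bdevs, wraps each in its own subsystem listening on 10.0.0.2:4420, then exposes the discovery service and a referral on port 4430. A dry-run sketch of the same loop, with the RPC names and arguments taken from the log and `RUN=echo`/the bare `rpc.py` wrapper as illustrative assumptions:

```shell
# Dry-run sketch of the discovery.sh provisioning loop. RUN=echo prints the
# rpc.py calls instead of issuing them (assumed guard).
RUN=echo
rpc() { $RUN rpc.py "$@"; }

provision_sketch() {
    for i in 1 2 3 4; do
        # Null bdev: size/block-size arguments exactly as in the log.
        rpc bdev_null_create "Null$i" 102400 512
        # One subsystem per bdev, any host allowed (-a), serial per the log.
        rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    # Expose the discovery subsystem itself, plus a referral on port 4430.
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
}

provision_sketch
```

This is why the subsequent `nvme discover` in the log returns exactly six discovery log entries: the current discovery subsystem, the four NVMe subsystems cnode1-cnode4 on trsvcid 4420, and the referral entry on trsvcid 4430.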
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.834 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.834 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.834 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:10.091 00:11:10.091 Discovery Log Number of Records 6, Generation counter 6 00:11:10.091 =====Discovery Log Entry 0====== 00:11:10.091 trtype: tcp 00:11:10.091 adrfam: ipv4 00:11:10.091 subtype: current discovery subsystem 00:11:10.091 treq: not required 00:11:10.091 portid: 0 00:11:10.091 trsvcid: 4420 00:11:10.091 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:10.091 traddr: 10.0.0.2 00:11:10.091 eflags: explicit discovery connections, duplicate discovery information 00:11:10.091 sectype: none 00:11:10.091 =====Discovery Log Entry 1====== 00:11:10.091 trtype: tcp 00:11:10.091 adrfam: ipv4 00:11:10.091 subtype: nvme subsystem 00:11:10.091 treq: not required 00:11:10.091 portid: 0 00:11:10.091 trsvcid: 4420 00:11:10.091 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:10.091 traddr: 10.0.0.2 00:11:10.091 eflags: none 00:11:10.091 sectype: none 00:11:10.091 =====Discovery Log Entry 2====== 00:11:10.091 trtype: tcp 00:11:10.091 adrfam: ipv4 00:11:10.091 subtype: nvme subsystem 00:11:10.091 treq: not required 00:11:10.091 portid: 0 00:11:10.091 trsvcid: 4420 00:11:10.091 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:10.091 traddr: 10.0.0.2 00:11:10.091 eflags: none 00:11:10.091 sectype: none 00:11:10.091 =====Discovery Log Entry 3====== 00:11:10.091 trtype: tcp 00:11:10.091 adrfam: ipv4 00:11:10.091 subtype: nvme subsystem 00:11:10.091 treq: not required 00:11:10.091 portid: 
0 00:11:10.091 trsvcid: 4420 00:11:10.091 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:10.091 traddr: 10.0.0.2 00:11:10.091 eflags: none 00:11:10.091 sectype: none 00:11:10.091 =====Discovery Log Entry 4====== 00:11:10.091 trtype: tcp 00:11:10.091 adrfam: ipv4 00:11:10.091 subtype: nvme subsystem 00:11:10.091 treq: not required 00:11:10.091 portid: 0 00:11:10.091 trsvcid: 4420 00:11:10.091 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:10.091 traddr: 10.0.0.2 00:11:10.091 eflags: none 00:11:10.091 sectype: none 00:11:10.091 =====Discovery Log Entry 5====== 00:11:10.091 trtype: tcp 00:11:10.091 adrfam: ipv4 00:11:10.091 subtype: discovery subsystem referral 00:11:10.091 treq: not required 00:11:10.091 portid: 0 00:11:10.091 trsvcid: 4430 00:11:10.091 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:10.091 traddr: 10.0.0.2 00:11:10.091 eflags: none 00:11:10.091 sectype: none 00:11:10.091 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:10.091 Perform nvmf subsystem discovery via RPC 00:11:10.091 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:10.091 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.091 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:10.091 [ 00:11:10.091 { 00:11:10.091 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:10.091 "subtype": "Discovery", 00:11:10.091 "listen_addresses": [ 00:11:10.091 { 00:11:10.091 "trtype": "TCP", 00:11:10.091 "adrfam": "IPv4", 00:11:10.091 "traddr": "10.0.0.2", 00:11:10.091 "trsvcid": "4420" 00:11:10.091 } 00:11:10.091 ], 00:11:10.091 "allow_any_host": true, 00:11:10.091 "hosts": [] 00:11:10.091 }, 00:11:10.091 { 00:11:10.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:10.091 "subtype": "NVMe", 00:11:10.091 "listen_addresses": [ 
00:11:10.091 { 00:11:10.091 "trtype": "TCP", 00:11:10.091 "adrfam": "IPv4", 00:11:10.091 "traddr": "10.0.0.2", 00:11:10.091 "trsvcid": "4420" 00:11:10.091 } 00:11:10.091 ], 00:11:10.091 "allow_any_host": true, 00:11:10.091 "hosts": [], 00:11:10.091 "serial_number": "SPDK00000000000001", 00:11:10.091 "model_number": "SPDK bdev Controller", 00:11:10.091 "max_namespaces": 32, 00:11:10.091 "min_cntlid": 1, 00:11:10.091 "max_cntlid": 65519, 00:11:10.091 "namespaces": [ 00:11:10.091 { 00:11:10.091 "nsid": 1, 00:11:10.091 "bdev_name": "Null1", 00:11:10.091 "name": "Null1", 00:11:10.091 "nguid": "2D2F26F1CED5457B83C221FF26C06473", 00:11:10.091 "uuid": "2d2f26f1-ced5-457b-83c2-21ff26c06473" 00:11:10.091 } 00:11:10.091 ] 00:11:10.091 }, 00:11:10.091 { 00:11:10.091 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:10.091 "subtype": "NVMe", 00:11:10.091 "listen_addresses": [ 00:11:10.091 { 00:11:10.091 "trtype": "TCP", 00:11:10.091 "adrfam": "IPv4", 00:11:10.091 "traddr": "10.0.0.2", 00:11:10.091 "trsvcid": "4420" 00:11:10.091 } 00:11:10.091 ], 00:11:10.091 "allow_any_host": true, 00:11:10.091 "hosts": [], 00:11:10.091 "serial_number": "SPDK00000000000002", 00:11:10.091 "model_number": "SPDK bdev Controller", 00:11:10.091 "max_namespaces": 32, 00:11:10.091 "min_cntlid": 1, 00:11:10.091 "max_cntlid": 65519, 00:11:10.091 "namespaces": [ 00:11:10.091 { 00:11:10.091 "nsid": 1, 00:11:10.091 "bdev_name": "Null2", 00:11:10.091 "name": "Null2", 00:11:10.091 "nguid": "7770D66149624134A84FC691A6F0E019", 00:11:10.091 "uuid": "7770d661-4962-4134-a84f-c691a6f0e019" 00:11:10.091 } 00:11:10.091 ] 00:11:10.091 }, 00:11:10.091 { 00:11:10.091 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:10.091 "subtype": "NVMe", 00:11:10.091 "listen_addresses": [ 00:11:10.091 { 00:11:10.091 "trtype": "TCP", 00:11:10.091 "adrfam": "IPv4", 00:11:10.091 "traddr": "10.0.0.2", 00:11:10.091 "trsvcid": "4420" 00:11:10.091 } 00:11:10.091 ], 00:11:10.091 "allow_any_host": true, 00:11:10.091 "hosts": [], 00:11:10.091 
"serial_number": "SPDK00000000000003", 00:11:10.091 "model_number": "SPDK bdev Controller", 00:11:10.091 "max_namespaces": 32, 00:11:10.091 "min_cntlid": 1, 00:11:10.091 "max_cntlid": 65519, 00:11:10.091 "namespaces": [ 00:11:10.091 { 00:11:10.091 "nsid": 1, 00:11:10.091 "bdev_name": "Null3", 00:11:10.091 "name": "Null3", 00:11:10.091 "nguid": "A16D65F9D20B4E1E97FA454B022FA695", 00:11:10.091 "uuid": "a16d65f9-d20b-4e1e-97fa-454b022fa695" 00:11:10.091 } 00:11:10.091 ] 00:11:10.091 }, 00:11:10.091 { 00:11:10.091 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:10.091 "subtype": "NVMe", 00:11:10.091 "listen_addresses": [ 00:11:10.091 { 00:11:10.091 "trtype": "TCP", 00:11:10.091 "adrfam": "IPv4", 00:11:10.091 "traddr": "10.0.0.2", 00:11:10.091 "trsvcid": "4420" 00:11:10.091 } 00:11:10.091 ], 00:11:10.091 "allow_any_host": true, 00:11:10.091 "hosts": [], 00:11:10.091 "serial_number": "SPDK00000000000004", 00:11:10.091 "model_number": "SPDK bdev Controller", 00:11:10.091 "max_namespaces": 32, 00:11:10.091 "min_cntlid": 1, 00:11:10.091 "max_cntlid": 65519, 00:11:10.091 "namespaces": [ 00:11:10.091 { 00:11:10.091 "nsid": 1, 00:11:10.091 "bdev_name": "Null4", 00:11:10.091 "name": "Null4", 00:11:10.092 "nguid": "971AE30E0FE1467B895E183DB8776E21", 00:11:10.092 "uuid": "971ae30e-0fe1-467b-895e-183db8776e21" 00:11:10.092 } 00:11:10.092 ] 00:11:10.092 } 00:11:10.092 ] 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.092 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:10.349 
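The teardown traced above mirrors the setup: the loop over `seq 1 4` deletes each subsystem and its null bdev, then the referral is removed before `nvmftestfini` unloads the kernel modules. A dry-run sketch, again with `RUN=echo` and the bare `rpc.py` wrapper as illustrative assumptions:

```shell
# Dry-run sketch of the discovery.sh teardown loop. RUN=echo prints the
# rpc.py calls instead of issuing them (assumed guard).
RUN=echo
rpc() { $RUN rpc.py "$@"; }

teardown_sketch() {
    for i in 1 2 3 4; do
        # Delete the subsystem first, then its backing null bdev.
        rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        rpc bdev_null_delete "Null$i"
    done
    rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
}

teardown_sketch
```

After this, `bdev_get_bdevs | jq -r '.[].name'` in the log returns nothing (empty `check_bdevs`), confirming the cleanup, and `nvmftestfini` proceeds to `modprobe -r` the nvme-tcp/nvme-fabrics modules and remove the namespace.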
03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:10.349 rmmod nvme_tcp 00:11:10.349 rmmod nvme_fabrics 00:11:10.349 rmmod nvme_keyring 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 766363 ']' 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 766363 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 766363 ']' 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 766363 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 766363 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 766363' 00:11:10.349 killing process with pid 766363 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 766363 00:11:10.349 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 766363 00:11:10.608 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:10.609 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:10.609 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:10.609 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.609 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:10.609 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.609 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.609 03:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.510 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:12.510 00:11:12.510 real 0m5.279s 00:11:12.510 user 0m4.350s 00:11:12.510 sys 0m1.753s 00:11:12.510 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.510 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.510 ************************************ 00:11:12.510 END TEST 
nvmf_target_discovery 00:11:12.510 ************************************ 00:11:12.510 03:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:12.510 03:54:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:12.510 03:54:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.510 03:54:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:12.769 ************************************ 00:11:12.769 START TEST nvmf_referrals 00:11:12.769 ************************************ 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:12.769 * Looking for test storage... 00:11:12.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.769 03:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:11:12.769 03:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:14.670 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:14.670 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:14.670 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found 
net devices under 0000:0a:00.1: cvl_0_1' 00:11:14.670 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.670 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:14.670 03:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:14.929 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.929 03:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:14.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:11:14.929 00:11:14.929 --- 10.0.0.2 ping statistics --- 00:11:14.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.929 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:14.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:11:14.929 00:11:14.929 --- 10.0.0.1 ping statistics --- 00:11:14.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.929 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=768450 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 768450 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 768450 ']' 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.929 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.929 [2024-07-25 03:54:30.186312] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:11:14.929 [2024-07-25 03:54:30.186408] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.929 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.929 [2024-07-25 03:54:30.227207] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:15.187 [2024-07-25 03:54:30.254824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.187 [2024-07-25 03:54:30.346584] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:15.187 [2024-07-25 03:54:30.346645] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.187 [2024-07-25 03:54:30.346674] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.187 [2024-07-25 03:54:30.346685] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.187 [2024-07-25 03:54:30.346695] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.187 [2024-07-25 03:54:30.346793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.187 [2024-07-25 03:54:30.346844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.187 [2024-07-25 03:54:30.346892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.187 [2024-07-25 03:54:30.346894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.187 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.187 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:15.187 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:15.187 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.187 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.445 
03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.445 [2024-07-25 03:54:30.504783] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.445 [2024-07-25 03:54:30.517059] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.445 03:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:15.445 03:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:15.445 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:15.703 03:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:15.703 03:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:15.994 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:15.995 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:15.995 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:15.995 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:15.995 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:15.995 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery 
subsystem").traddr' 00:11:15.995 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:15.995 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:15.995 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:15.995 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:15.995 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:15.995 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:15.995 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:15.995 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:16.274 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:16.275 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:16.275 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:16.275 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:16.532 03:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.532 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:16.790 03:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:16.790 03:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:16.790 rmmod nvme_tcp 00:11:16.790 rmmod nvme_fabrics 00:11:16.790 rmmod nvme_keyring 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 768450 ']' 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 768450 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 768450 ']' 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 768450 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:16.790 03:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 768450 00:11:16.790 03:54:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:16.790 03:54:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:11:16.790 03:54:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 768450' 00:11:16.790 killing process with pid 768450 00:11:16.790 03:54:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 768450 00:11:16.790 03:54:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 768450 00:11:17.049 03:54:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:17.049 03:54:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:17.049 03:54:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:17.049 03:54:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:17.049 03:54:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:17.049 03:54:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.049 03:54:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.049 03:54:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:19.580 00:11:19.580 real 0m6.473s 00:11:19.580 user 0m8.904s 00:11:19.580 sys 0m2.133s 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:19.580 ************************************ 00:11:19.580 END TEST nvmf_referrals 00:11:19.580 ************************************ 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra -- 
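The referrals test above repeatedly verifies state through its `get_referral_ips` helper, which pipes `nvme discover ... -o json` through `jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'` and `sort`, then compares the joined result against the expected address list. The filtering step can be sketched in Python for clarity; the sample records below are hypothetical stand-ins, not output captured from this run:

```python
import json

# Hypothetical discovery output in the shape `nvme discover ... -o json` emits:
# one record for the discovery subsystem we queried, plus referral records.
sample = {
    "records": [
        {"subtype": "current discovery subsystem", "traddr": "10.0.0.2"},
        {"subtype": "discovery subsystem referral", "traddr": "127.0.0.3"},
        {"subtype": "discovery subsystem referral", "traddr": "127.0.0.2"},
        {"subtype": "discovery subsystem referral", "traddr": "127.0.0.4"},
    ]
}

def referral_traddrs(discovery_json):
    """Mirror the test's jq filter: drop the 'current discovery subsystem'
    record, collect each remaining record's traddr, and sort them."""
    return sorted(
        rec["traddr"]
        for rec in discovery_json.get("records", [])
        if rec.get("subtype") != "current discovery subsystem"
    )

# The test compares this space-joined string against the expected IP list,
# e.g. [[ "$ips" == "127.0.0.2 127.0.0.3 127.0.0.4" ]].
print(" ".join(referral_traddrs(sample)))
```

After all referrals are removed, the filter yields an empty list, which is why the log's final checks compare against the empty string (`[[ '' == '' ]]`).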
nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.580 ************************************ 00:11:19.580 START TEST nvmf_connect_disconnect 00:11:19.580 ************************************ 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:19.580 * Looking for test storage... 00:11:19.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.580 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:19.581 03:54:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:11:21.485 03:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- 
# [[ tcp == rdma ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:21.485 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:21.485 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:21.485 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.485 03:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:21.485 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.485 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:21.486 03:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:21.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:11:21.486 00:11:21.486 --- 10.0.0.2 ping statistics --- 00:11:21.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.486 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:21.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:11:21.486 00:11:21.486 --- 10.0.0.1 ping statistics --- 00:11:21.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.486 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # 
nvmfpid=770737 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 770737 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 770737 ']' 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.486 03:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:21.486 [2024-07-25 03:54:36.751786] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:11:21.486 [2024-07-25 03:54:36.751871] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.744 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.744 [2024-07-25 03:54:36.789801] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:21.744 [2024-07-25 03:54:36.821828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.744 [2024-07-25 03:54:36.916199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.744 [2024-07-25 03:54:36.916279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.744 [2024-07-25 03:54:36.916297] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.744 [2024-07-25 03:54:36.916311] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.744 [2024-07-25 03:54:36.916323] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.744 [2024-07-25 03:54:36.916408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.744 [2024-07-25 03:54:36.916461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.744 [2024-07-25 03:54:36.916513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.744 [2024-07-25 03:54:36.916516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 
00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.002 [2024-07-25 03:54:37.073883] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.002 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.003 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:22.003 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.003 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.003 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.003 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.003 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.003 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.003 [2024-07-25 03:54:37.124875] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.003 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.003 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:22.003 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:22.003 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:22.003 03:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:24.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:11:35.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.852 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.853 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.817 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.833 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:12.833 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:12.833 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:12.833 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:15:12.833 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:12.833 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:15:12.833 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.833 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:12.833 rmmod nvme_tcp 00:15:12.833 
rmmod nvme_fabrics 00:15:12.833 rmmod nvme_keyring 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 770737 ']' 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 770737 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 770737 ']' 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 770737 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 770737 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 770737' 00:15:12.834 killing process with pid 770737 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 770737 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 770737 
00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.834 03:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.734 03:58:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:14.734 00:15:14.734 real 3m55.543s 00:15:14.734 user 14m55.616s 00:15:14.734 sys 0m35.094s 00:15:14.734 03:58:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:14.734 03:58:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:14.734 ************************************ 00:15:14.734 END TEST nvmf_connect_disconnect 00:15:14.734 ************************************ 00:15:14.734 03:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:14.734 03:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:14.734 03:58:29 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.734 03:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:14.734 ************************************ 00:15:14.734 START TEST nvmf_multitarget 00:15:14.734 ************************************ 00:15:14.734 03:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:14.734 * Looking for test storage... 00:15:14.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:14.734 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.734 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:14.734 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.734 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.734 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.734 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.734 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.734 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.735 
03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.735 03:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:14.735 
03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:15:14.735 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:17.265 03:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.265 03:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:17.265 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:17.265 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.265 03:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:17.265 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:17.266 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:17.266 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:17.266 03:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:17.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:15:17.266 00:15:17.266 --- 10.0.0.2 ping statistics --- 00:15:17.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.266 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:17.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:15:17.266 00:15:17.266 --- 10.0.0.1 ping statistics --- 00:15:17.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.266 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=801727 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 801727 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 801727 ']' 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:17.266 [2024-07-25 03:58:32.264104] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:15:17.266 [2024-07-25 03:58:32.264200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.266 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.266 [2024-07-25 03:58:32.303906] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:17.266 [2024-07-25 03:58:32.330470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.266 [2024-07-25 03:58:32.422993] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.266 [2024-07-25 03:58:32.423059] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.266 [2024-07-25 03:58:32.423088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.266 [2024-07-25 03:58:32.423100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.266 [2024-07-25 03:58:32.423110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.266 [2024-07-25 03:58:32.424262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.266 [2024-07-25 03:58:32.424344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.266 [2024-07-25 03:58:32.424403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.266 [2024-07-25 03:58:32.424407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:17.266 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:17.524 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.524 03:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:17.524 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:17.524 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:17.524 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:17.524 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:17.524 "nvmf_tgt_1" 00:15:17.524 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:17.782 "nvmf_tgt_2" 00:15:17.782 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:17.782 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:17.782 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:17.782 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:18.039 true 00:15:18.039 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:18.039 true 
00:15:18.039 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:18.039 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:18.297 rmmod nvme_tcp 00:15:18.297 rmmod nvme_fabrics 00:15:18.297 rmmod nvme_keyring 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 801727 ']' 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 801727 00:15:18.297 03:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 801727 ']' 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 801727 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 801727 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 801727' 00:15:18.297 killing process with pid 801727 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 801727 00:15:18.297 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 801727 00:15:18.555 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:18.555 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:18.555 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:18.555 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.555 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:18.555 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.555 
03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:18.555 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.457 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:20.457 00:15:20.457 real 0m5.780s 00:15:20.457 user 0m6.495s 00:15:20.457 sys 0m1.941s 00:15:20.457 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.457 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:20.457 ************************************ 00:15:20.457 END TEST nvmf_multitarget 00:15:20.457 ************************************ 00:15:20.457 03:58:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:20.457 03:58:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:20.457 03:58:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.457 03:58:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:20.715 ************************************ 00:15:20.715 START TEST nvmf_rpc 00:15:20.715 ************************************ 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:20.715 * Looking for test storage... 
00:15:20.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.715 
03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.715 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.716 03:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:15:20.716 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:22.614 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:22.614 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:22.614 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.614 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:22.615 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.615 03:58:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:22.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:22.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:15:22.615 00:15:22.615 --- 10.0.0.2 ping statistics --- 00:15:22.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.615 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:15:22.615 00:15:22.615 --- 10.0.0.1 ping statistics --- 00:15:22.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.615 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=803827 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 803827 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 803827 ']' 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.615 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.871 [2024-07-25 03:58:37.948542] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:15:22.872 [2024-07-25 03:58:37.948637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.872 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.872 [2024-07-25 03:58:37.988112] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:22.872 [2024-07-25 03:58:38.014511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.872 [2024-07-25 03:58:38.103949] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.872 [2024-07-25 03:58:38.104022] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.872 [2024-07-25 03:58:38.104036] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.872 [2024-07-25 03:58:38.104048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.872 [2024-07-25 03:58:38.104057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:22.872 [2024-07-25 03:58:38.104140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.872 [2024-07-25 03:58:38.104207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.872 [2024-07-25 03:58:38.104264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.872 [2024-07-25 03:58:38.104267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:23.129 "tick_rate": 2700000000, 00:15:23.129 "poll_groups": [ 00:15:23.129 { 00:15:23.129 "name": "nvmf_tgt_poll_group_000", 00:15:23.129 "admin_qpairs": 0, 00:15:23.129 "io_qpairs": 0, 00:15:23.129 "current_admin_qpairs": 0, 00:15:23.129 "current_io_qpairs": 0, 00:15:23.129 "pending_bdev_io": 0, 00:15:23.129 "completed_nvme_io": 0, 
00:15:23.129 "transports": [] 00:15:23.129 }, 00:15:23.129 { 00:15:23.129 "name": "nvmf_tgt_poll_group_001", 00:15:23.129 "admin_qpairs": 0, 00:15:23.129 "io_qpairs": 0, 00:15:23.129 "current_admin_qpairs": 0, 00:15:23.129 "current_io_qpairs": 0, 00:15:23.129 "pending_bdev_io": 0, 00:15:23.129 "completed_nvme_io": 0, 00:15:23.129 "transports": [] 00:15:23.129 }, 00:15:23.129 { 00:15:23.129 "name": "nvmf_tgt_poll_group_002", 00:15:23.129 "admin_qpairs": 0, 00:15:23.129 "io_qpairs": 0, 00:15:23.129 "current_admin_qpairs": 0, 00:15:23.129 "current_io_qpairs": 0, 00:15:23.129 "pending_bdev_io": 0, 00:15:23.129 "completed_nvme_io": 0, 00:15:23.129 "transports": [] 00:15:23.129 }, 00:15:23.129 { 00:15:23.129 "name": "nvmf_tgt_poll_group_003", 00:15:23.129 "admin_qpairs": 0, 00:15:23.129 "io_qpairs": 0, 00:15:23.129 "current_admin_qpairs": 0, 00:15:23.129 "current_io_qpairs": 0, 00:15:23.129 "pending_bdev_io": 0, 00:15:23.129 "completed_nvme_io": 0, 00:15:23.129 "transports": [] 00:15:23.129 } 00:15:23.129 ] 00:15:23.129 }' 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.129 [2024-07-25 03:58:38.353906] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.129 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.130 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.130 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:23.130 "tick_rate": 2700000000, 00:15:23.130 "poll_groups": [ 00:15:23.130 { 00:15:23.130 "name": "nvmf_tgt_poll_group_000", 00:15:23.130 "admin_qpairs": 0, 00:15:23.130 "io_qpairs": 0, 00:15:23.130 "current_admin_qpairs": 0, 00:15:23.130 "current_io_qpairs": 0, 00:15:23.130 "pending_bdev_io": 0, 00:15:23.130 "completed_nvme_io": 0, 00:15:23.130 "transports": [ 00:15:23.130 { 00:15:23.130 "trtype": "TCP" 00:15:23.130 } 00:15:23.130 ] 00:15:23.130 }, 00:15:23.130 { 00:15:23.130 "name": "nvmf_tgt_poll_group_001", 00:15:23.130 "admin_qpairs": 0, 00:15:23.130 "io_qpairs": 0, 00:15:23.130 "current_admin_qpairs": 0, 00:15:23.130 "current_io_qpairs": 0, 00:15:23.130 "pending_bdev_io": 0, 00:15:23.130 "completed_nvme_io": 0, 00:15:23.130 "transports": [ 00:15:23.130 { 00:15:23.130 "trtype": "TCP" 00:15:23.130 } 00:15:23.130 ] 00:15:23.130 }, 00:15:23.130 { 00:15:23.130 "name": "nvmf_tgt_poll_group_002", 00:15:23.130 "admin_qpairs": 0, 00:15:23.130 "io_qpairs": 0, 00:15:23.130 "current_admin_qpairs": 0, 00:15:23.130 "current_io_qpairs": 0, 00:15:23.130 "pending_bdev_io": 0, 00:15:23.130 "completed_nvme_io": 0, 00:15:23.130 
"transports": [ 00:15:23.130 { 00:15:23.130 "trtype": "TCP" 00:15:23.130 } 00:15:23.130 ] 00:15:23.130 }, 00:15:23.130 { 00:15:23.130 "name": "nvmf_tgt_poll_group_003", 00:15:23.130 "admin_qpairs": 0, 00:15:23.130 "io_qpairs": 0, 00:15:23.130 "current_admin_qpairs": 0, 00:15:23.130 "current_io_qpairs": 0, 00:15:23.130 "pending_bdev_io": 0, 00:15:23.130 "completed_nvme_io": 0, 00:15:23.130 "transports": [ 00:15:23.130 { 00:15:23.130 "trtype": "TCP" 00:15:23.130 } 00:15:23.130 ] 00:15:23.130 } 00:15:23.130 ] 00:15:23.130 }' 00:15:23.130 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:23.130 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:23.130 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:23.130 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:23.130 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:23.130 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:23.130 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:23.130 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:23.130 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:23.387 03:58:38 
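The `jcount` and `jsum` checks traced above aggregate fields out of the `nvmf_get_stats` JSON: the suite's helpers pipe `jq '<filter>'` into `wc -l` (count) or `awk '{s+=$1}END{print s}'` (sum). The sketch below re-creates that logic against an abbreviated copy of the stats payload; it substitutes `grep`/`sed` for `jq` so it runs without `jq` installed, which is an approximation, not the suite's exact implementation.

```shell
#!/bin/sh
# Abbreviated stand-in for the nvmf_get_stats output seen in the trace.
stats='{
  "poll_groups": [
    { "name": "nvmf_tgt_poll_group_000", "io_qpairs": 0 },
    { "name": "nvmf_tgt_poll_group_001", "io_qpairs": 0 },
    { "name": "nvmf_tgt_poll_group_002", "io_qpairs": 0 },
    { "name": "nvmf_tgt_poll_group_003", "io_qpairs": 0 }
  ]
}'

# jcount: the suite runs `jq '.poll_groups[].name' | wc -l`; grep -c on the
# key name approximates that here so the sketch has no jq dependency.
jcount() { printf '%s\n' "$stats" | grep -c "\"$1\""; }

# jsum: the suite runs `jq '<filter>' | awk '{s+=$1}END{print s}'`; sed
# extracts each numeric field value before awk sums them.
jsum() {
  printf '%s\n' "$stats" |
    sed -n "s/.*\"$1\": *\([0-9]*\).*/\1/p" |
    awk '{s+=$1} END {print s}'
}

jcount name        # 4 poll groups, matching the (( 4 == 4 )) check above
jsum io_qpairs     # 0 before any host connects, matching (( 0 == 0 ))
```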
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.387 Malloc1 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.387 [2024-07-25 03:58:38.515663] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:23.387 [2024-07-25 03:58:38.538041] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:23.387 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:23.387 could not add new controller: failed to write to nvme-fabrics device 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
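The `NOT nvme connect ...` invocation traced above is an expect-failure assertion: the connect must be rejected (the subsystem does not yet allow the host NQN), and the wrapper inverts the exit status so the test passes only on failure. A minimal sketch of that pattern follows; the real `NOT` in `common/autotest_common.sh` is more involved (it tracks `$es` and distinguishes exit-code ranges, as the `(( es > 128 ))` lines show), so this is a simplified illustration, not the suite's actual helper.

```shell
#!/bin/sh
# Expect-failure wrapper: run a command that SHOULD fail and invert the
# result, mirroring how the suite asserts that the un-allowed host's
# `nvme connect` is rejected by the target.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded
  else
    return 0   # command failed, which is what we expected
  fi
}

NOT false && echo "expected failure observed"
NOT true || echo "unexpected success caught"
```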
00:15:23.387 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:23.949 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:23.949 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:23.949 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:23.949 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:23.949 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:26.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- 
# local i=0 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:26.471 03:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:26.471 [2024-07-25 03:58:41.346850] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:26.471 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:26.471 could not add new controller: failed to write to nvme-fabrics device 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.471 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:26.728 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:26.728 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:26.728 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:26.728 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:26.728 03:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:29.251 03:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:29.251 03:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:29.251 03:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:29.251 03:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:29.251 03:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:29.251 03:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:29.251 03:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:29.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.251 03:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.251 [2024-07-25 03:58:44.087606] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.251 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:29.509 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:29.509 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:29.509 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:29.509 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:29.509 03:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:32.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.035 [2024-07-25 03:58:46.902251] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.035 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:32.036 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.036 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.036 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.036 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:32.294 03:58:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:32.294 03:58:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
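The `waitforserial` sequence traced here polls `lsblk -l -o NAME,SERIAL` until the device with the expected serial appears, retrying up to 15 times with a `sleep 2` between attempts. The sketch below reproduces that loop shape with `lsblk` replaced by a stub (the device "appears" on the third poll) and the sleep shortened, so it runs anywhere; both substitutions are illustrative assumptions.

```shell
#!/bin/sh
# Stub for lsblk: pretend the namespace block device shows up on poll 3.
lsblk_stub() {
  [ "$1" -ge 3 ] && echo "nvme0n1 SPDKISFASTANDAWESOME"
}

# Polling loop patterned on the suite's waitforserial: after nvme connect,
# the block device may take a moment to appear, so re-check until the
# serial is visible or 15 attempts pass.
waitforserial() {
  serial=$1 i=0
  while [ "$i" -le 15 ]; do
    i=$((i + 1))
    if lsblk_stub "$i" | grep -q "$serial"; then
      echo "device visible on poll $i"
      return 0
    fi
    sleep 1   # the real helper sleeps 2s between polls
  done
  return 1
}

waitforserial SPDKISFASTANDAWESOME
```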
00:15:32.294 03:58:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:32.294 03:58:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:32.294 03:58:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:34.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:34.820 [2024-07-25 03:58:49.690824] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.820 03:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:35.078 03:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:35.078 03:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:35.078 03:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:35.078 03:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:35.078 
03:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:37.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
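Each iteration of the `for i in $(seq 1 $loops)` loop traced above (target/rpc.sh@81-94) builds a subsystem, exercises it from the host, and tears it down. The outline below shows that per-iteration RPC sequence with `rpc_cmd` stubbed to `echo`, so the order of calls is visible without a running SPDK target; against a live target, `rpc_cmd` wraps `scripts/rpc.py`, and the commented-out middle is where the host-side connect/disconnect happens.

```shell
#!/bin/sh
# Stub: print each RPC instead of invoking scripts/rpc.py against a target.
rpc_cmd() { echo "rpc: $*"; }

loops=5
NQN=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 "$loops"); do
  rpc_cmd nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host "$NQN"
  # ... host runs nvme connect, waitforserial, nvme disconnect ...
  rpc_cmd nvmf_subsystem_remove_ns "$NQN" 5
  rpc_cmd nvmf_delete_subsystem "$NQN"
done
```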
00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.642 [2024-07-25 03:58:52.476427] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.642 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:37.900 03:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:37.900 03:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:37.900 03:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:37.900 03:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:37.900 03:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.425 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.426 [2024-07-25 03:58:55.303613] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.426 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:40.684 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:40.684 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:40.684 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:40.684 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:40.684 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:43.211 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:43.211 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:43.211 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:43.211 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:43.211 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:43.211 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:43.212 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:43.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:43.212 03:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 [2024-07-25 03:58:58.076629] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 
03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 
03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 [2024-07-25 03:58:58.124662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 [2024-07-25 03:58:58.172816] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 [2024-07-25 03:58:58.220988] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.212 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.213 [2024-07-25 03:58:58.269149] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.213 03:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.213 03:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:43.213 "tick_rate": 2700000000, 00:15:43.213 "poll_groups": [ 00:15:43.213 { 00:15:43.213 "name": "nvmf_tgt_poll_group_000", 00:15:43.213 "admin_qpairs": 2, 00:15:43.213 "io_qpairs": 84, 00:15:43.213 "current_admin_qpairs": 0, 00:15:43.213 "current_io_qpairs": 0, 00:15:43.213 "pending_bdev_io": 0, 00:15:43.213 "completed_nvme_io": 135, 00:15:43.213 "transports": [ 00:15:43.213 { 00:15:43.213 "trtype": "TCP" 00:15:43.213 } 00:15:43.213 ] 00:15:43.213 }, 00:15:43.213 { 00:15:43.213 "name": "nvmf_tgt_poll_group_001", 00:15:43.213 "admin_qpairs": 2, 00:15:43.213 "io_qpairs": 84, 00:15:43.213 "current_admin_qpairs": 0, 00:15:43.213 "current_io_qpairs": 0, 00:15:43.213 "pending_bdev_io": 0, 00:15:43.213 "completed_nvme_io": 182, 00:15:43.213 "transports": [ 00:15:43.213 { 00:15:43.213 "trtype": "TCP" 00:15:43.213 } 00:15:43.213 ] 00:15:43.213 }, 00:15:43.213 { 00:15:43.213 "name": "nvmf_tgt_poll_group_002", 00:15:43.213 "admin_qpairs": 1, 00:15:43.213 "io_qpairs": 84, 00:15:43.213 "current_admin_qpairs": 0, 00:15:43.213 "current_io_qpairs": 0, 00:15:43.213 "pending_bdev_io": 0, 00:15:43.213 "completed_nvme_io": 135, 00:15:43.213 "transports": [ 00:15:43.213 { 00:15:43.213 "trtype": "TCP" 00:15:43.213 } 00:15:43.213 ] 00:15:43.213 }, 00:15:43.213 { 00:15:43.213 "name": "nvmf_tgt_poll_group_003", 00:15:43.213 "admin_qpairs": 2, 00:15:43.213 "io_qpairs": 84, 00:15:43.213 "current_admin_qpairs": 0, 00:15:43.213 "current_io_qpairs": 0, 00:15:43.213 "pending_bdev_io": 0, 
00:15:43.213 "completed_nvme_io": 234, 00:15:43.213 "transports": [ 00:15:43.213 { 00:15:43.213 "trtype": "TCP" 00:15:43.213 } 00:15:43.213 ] 00:15:43.213 } 00:15:43.213 ] 00:15:43.213 }' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.213 rmmod nvme_tcp 00:15:43.213 rmmod nvme_fabrics 00:15:43.213 rmmod nvme_keyring 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 803827 ']' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 803827 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 803827 ']' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 803827 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 803827 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 803827' 00:15:43.213 killing process with pid 803827 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 803827 00:15:43.213 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@974 -- # wait 803827 00:15:43.472 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.472 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:43.472 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:43.472 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.472 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.472 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.472 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.472 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:46.004 00:15:46.004 real 0m25.032s 00:15:46.004 user 1m21.664s 00:15:46.004 sys 0m4.082s 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.004 ************************************ 00:15:46.004 END TEST nvmf_rpc 00:15:46.004 ************************************ 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
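The `killprocess` sequence traced above checks that the pid is alive with `kill -0`, inspects the process name, then kills and waits for it. A simplified sketch of the same pattern, without the sudo/comm-name checks the real helper performs:

```shell
# Minimal killprocess-style helper: verify, terminate, reap.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # bail out if the pid is already gone
    kill "$pid"                              # send SIGTERM
    wait "$pid" 2>/dev/null || true          # reap; a 143 exit status is expected here
}

sleep 60 &
bgpid=$!
killprocess "$bgpid"
kill -0 "$bgpid" 2>/dev/null && echo alive || echo gone
```

The final `wait` matters: it reaps the child so the pid cannot be recycled while later cleanup steps (like the `rmmod` calls above) still reference it.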
set +x 00:15:46.004 ************************************ 00:15:46.004 START TEST nvmf_invalid 00:15:46.004 ************************************ 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:46.004 * Looking for test storage... 00:15:46.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:46.004 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
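The heavily duplicated `/opt/go`, `/opt/protoc`, and `/opt/golangci` entries in the PATH dumps above come from `paths/export.sh` being sourced once per test script, prepending the same directories each time. A hedged sketch of a guard that would keep the list from growing (the helper name is illustrative, not part of the repo):

```shell
# Prepend dir $1 to the colon-separated list $2 only if it is absent.
path_prepend() {
    case ":$2:" in
        *":$1:"*) echo "$2" ;;    # already present: leave the list unchanged
        *)        echo "$1:$2" ;; # otherwise prepend
    esac
}

p=/usr/local/bin:/usr/bin
p=$(path_prepend /opt/go/1.21.1/bin "$p")
p=$(path_prepend /opt/go/1.21.1/bin "$p")   # second call is a no-op
echo "$p"
```

Wrapping the pattern in `:$list:` lets a plain glob match whole entries without false positives on substrings of longer paths.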
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:46.005 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:47.906 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:47.906 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:47.906 03:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:47.906 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:47.906 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:47.906 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:47.906 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:47.906 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:47.906 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:47.906 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.907 
03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:47.907 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.907 03:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:47.907 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.907 
03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:47.907 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:47.907 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:47.907 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:47.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:15:47.907 00:15:47.907 --- 10.0.0.2 ping statistics --- 00:15:47.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.907 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:47.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:47.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:15:47.907 00:15:47.907 --- 10.0.0.1 ping statistics --- 00:15:47.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.907 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=808297 00:15:47.907 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
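`nvmf_tcp_init` pings in both directions across the namespace boundary before the target starts, and the interesting number in that output is the rtt summary line. A small sketch of pulling the average rtt out of it (the sample line is copied from the log; real scripts would capture `ping` output directly):

```shell
# Extract the avg rtt from ping's summary line, e.g.
# "rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms"
line='rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms'

# Split on '/' or space: fields are rtt min avg max mdev = <min> <avg> <max> <mdev> ms
avg=$(echo "$line" | awk -F'[/ ]' '{print $8}')
echo "$avg"
```

The test itself only needs the ping to succeed (exit status 0); parsing the rtt like this is useful when a latency threshold, rather than bare reachability, is the pass criterion.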
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:47.908 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 808297 00:15:47.908 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 808297 ']' 00:15:47.908 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.908 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:47.908 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.908 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:47.908 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:47.908 [2024-07-25 03:59:03.084181] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:15:47.908 [2024-07-25 03:59:03.084275] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.908 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.908 [2024-07-25 03:59:03.122474] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:47.908 [2024-07-25 03:59:03.155014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:48.165 [2024-07-25 03:59:03.249585] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:48.165 [2024-07-25 03:59:03.249648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.165 [2024-07-25 03:59:03.249665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.165 [2024-07-25 03:59:03.249678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.165 [2024-07-25 03:59:03.249690] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.165 [2024-07-25 03:59:03.249780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.165 [2024-07-25 03:59:03.249835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.165 [2024-07-25 03:59:03.249883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:48.165 [2024-07-25 03:59:03.249886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.165 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:48.165 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:48.165 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:48.165 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:48.165 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:48.165 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.165 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:48.165 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8060 00:15:48.422 [2024-07-25 03:59:03.628432] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:48.422 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:48.422 { 00:15:48.422 "nqn": "nqn.2016-06.io.spdk:cnode8060", 00:15:48.422 "tgt_name": "foobar", 00:15:48.422 "method": "nvmf_create_subsystem", 00:15:48.422 "req_id": 1 00:15:48.422 } 00:15:48.422 Got JSON-RPC error response 00:15:48.422 response: 00:15:48.422 { 00:15:48.422 "code": -32603, 00:15:48.422 "message": "Unable to find target foobar" 00:15:48.422 }' 00:15:48.422 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:48.422 { 00:15:48.422 "nqn": "nqn.2016-06.io.spdk:cnode8060", 00:15:48.422 "tgt_name": "foobar", 00:15:48.422 "method": "nvmf_create_subsystem", 00:15:48.422 "req_id": 1 00:15:48.422 } 00:15:48.422 Got JSON-RPC error response 00:15:48.422 response: 00:15:48.422 { 00:15:48.422 "code": -32603, 00:15:48.422 "message": "Unable to find target foobar" 00:15:48.422 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:48.422 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:48.422 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22473 00:15:48.678 [2024-07-25 03:59:03.873228] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22473: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:48.678 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:48.678 { 00:15:48.678 "nqn": "nqn.2016-06.io.spdk:cnode22473", 00:15:48.678 "serial_number": 
"SPDKISFASTANDAWESOME\u001f", 00:15:48.678 "method": "nvmf_create_subsystem", 00:15:48.678 "req_id": 1 00:15:48.678 } 00:15:48.678 Got JSON-RPC error response 00:15:48.678 response: 00:15:48.678 { 00:15:48.678 "code": -32602, 00:15:48.678 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:48.678 }' 00:15:48.678 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:48.678 { 00:15:48.678 "nqn": "nqn.2016-06.io.spdk:cnode22473", 00:15:48.678 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:48.678 "method": "nvmf_create_subsystem", 00:15:48.678 "req_id": 1 00:15:48.678 } 00:15:48.678 Got JSON-RPC error response 00:15:48.678 response: 00:15:48.678 { 00:15:48.678 "code": -32602, 00:15:48.678 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:48.678 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:48.678 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:48.678 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23344 00:15:48.935 [2024-07-25 03:59:04.109953] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23344: invalid model number 'SPDK_Controller' 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:48.935 { 00:15:48.935 "nqn": "nqn.2016-06.io.spdk:cnode23344", 00:15:48.935 "model_number": "SPDK_Controller\u001f", 00:15:48.935 "method": "nvmf_create_subsystem", 00:15:48.935 "req_id": 1 00:15:48.935 } 00:15:48.935 Got JSON-RPC error response 00:15:48.935 response: 00:15:48.935 { 00:15:48.935 "code": -32602, 00:15:48.935 "message": "Invalid MN SPDK_Controller\u001f" 00:15:48.935 }' 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:48.935 { 00:15:48.935 "nqn": 
"nqn.2016-06.io.spdk:cnode23344", 00:15:48.935 "model_number": "SPDK_Controller\u001f", 00:15:48.935 "method": "nvmf_create_subsystem", 00:15:48.935 "req_id": 1 00:15:48.935 } 00:15:48.935 Got JSON-RPC error response 00:15:48.935 response: 00:15:48.935 { 00:15:48.935 "code": -32602, 00:15:48.935 "message": "Invalid MN SPDK_Controller\u001f" 00:15:48.935 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:48.935 03:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:48.935 03:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:15:48.935 03:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:48.935 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.936 03:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.936 03:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 've9t%#PB.CgAV(u^iE#?G' 00:15:48.936 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 've9t%#PB.CgAV(u^iE#?G' nqn.2016-06.io.spdk:cnode3188 00:15:49.194 [2024-07-25 03:59:04.435029] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3188: invalid serial number 've9t%#PB.CgAV(u^iE#?G' 00:15:49.194 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:49.194 { 00:15:49.194 "nqn": "nqn.2016-06.io.spdk:cnode3188", 00:15:49.194 "serial_number": "ve9t%#PB.CgAV(u^iE#?G", 00:15:49.194 "method": "nvmf_create_subsystem", 00:15:49.194 "req_id": 1 00:15:49.194 } 00:15:49.194 Got JSON-RPC error response 00:15:49.194 response: 00:15:49.194 { 00:15:49.194 "code": -32602, 00:15:49.194 "message": "Invalid SN ve9t%#PB.CgAV(u^iE#?G" 00:15:49.194 }' 00:15:49.194 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:49.194 { 
00:15:49.194 "nqn": "nqn.2016-06.io.spdk:cnode3188", 00:15:49.194 "serial_number": "ve9t%#PB.CgAV(u^iE#?G", 00:15:49.194 "method": "nvmf_create_subsystem", 00:15:49.194 "req_id": 1 00:15:49.194 } 00:15:49.194 Got JSON-RPC error response 00:15:49.194 response: 00:15:49.194 { 00:15:49.194 "code": -32602, 00:15:49.194 "message": "Invalid SN ve9t%#PB.CgAV(u^iE#?G" 00:15:49.194 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:49.194 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:49.194 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:49.194 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:49.194 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:49.194 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:49.194 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:49.194 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.194 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:49.194 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:49.194 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:49.194 03:59:04 
[... 41 iterations of the gen_random_s character loop trimmed (invalid.sh@24-25: (( ll++ )) / printf %x / echo -e '\xNN' / string+=); the loop assembles the random model number echoed below ...]
00:15:49.453 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ @ == \- ]] 00:15:49.453 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '@Ohel_=%|P&;Yd5_p4s}YmD98B4Eke6N}fx)Q(EmW' 00:15:49.453 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '@Ohel_=%|P&;Yd5_p4s}YmD98B4Eke6N}fx)Q(EmW' nqn.2016-06.io.spdk:cnode14630 00:15:49.711 [2024-07-25 03:59:04.836309] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14630: invalid model number '@Ohel_=%|P&;Yd5_p4s}YmD98B4Eke6N}fx)Q(EmW' 00:15:49.711 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid
-- target/invalid.sh@58 -- # out='request: 00:15:49.711 { 00:15:49.711 "nqn": "nqn.2016-06.io.spdk:cnode14630", 00:15:49.711 "model_number": "@Ohel_=%|P&;Yd5_p4s}YmD98B4Eke6N}fx)Q(EmW", 00:15:49.711 "method": "nvmf_create_subsystem", 00:15:49.711 "req_id": 1 00:15:49.711 } 00:15:49.711 Got JSON-RPC error response 00:15:49.711 response: 00:15:49.711 { 00:15:49.711 "code": -32602, 00:15:49.711 "message": "Invalid MN @Ohel_=%|P&;Yd5_p4s}YmD98B4Eke6N}fx)Q(EmW" 00:15:49.711 }' 00:15:49.711 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:49.711 { 00:15:49.711 "nqn": "nqn.2016-06.io.spdk:cnode14630", 00:15:49.711 "model_number": "@Ohel_=%|P&;Yd5_p4s}YmD98B4Eke6N}fx)Q(EmW", 00:15:49.711 "method": "nvmf_create_subsystem", 00:15:49.711 "req_id": 1 00:15:49.711 } 00:15:49.711 Got JSON-RPC error response 00:15:49.711 response: 00:15:49.711 { 00:15:49.711 "code": -32602, 00:15:49.711 "message": "Invalid MN @Ohel_=%|P&;Yd5_p4s}YmD98B4Eke6N}fx)Q(EmW" 00:15:49.711 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:49.711 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:49.968 [2024-07-25 03:59:05.089191] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.968 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:50.225 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:50.225 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:50.225 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:50.225 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:50.225 03:59:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:50.483 [2024-07-25 03:59:05.578771] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:50.483 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:50.483 { 00:15:50.483 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:50.483 "listen_address": { 00:15:50.483 "trtype": "tcp", 00:15:50.483 "traddr": "", 00:15:50.483 "trsvcid": "4421" 00:15:50.483 }, 00:15:50.483 "method": "nvmf_subsystem_remove_listener", 00:15:50.483 "req_id": 1 00:15:50.483 } 00:15:50.483 Got JSON-RPC error response 00:15:50.483 response: 00:15:50.483 { 00:15:50.483 "code": -32602, 00:15:50.483 "message": "Invalid parameters" 00:15:50.483 }' 00:15:50.483 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:50.483 { 00:15:50.483 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:50.483 "listen_address": { 00:15:50.483 "trtype": "tcp", 00:15:50.483 "traddr": "", 00:15:50.483 "trsvcid": "4421" 00:15:50.483 }, 00:15:50.483 "method": "nvmf_subsystem_remove_listener", 00:15:50.483 "req_id": 1 00:15:50.483 } 00:15:50.483 Got JSON-RPC error response 00:15:50.483 response: 00:15:50.483 { 00:15:50.483 "code": -32602, 00:15:50.483 "message": "Invalid parameters" 00:15:50.483 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:50.483 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5603 -i 0 00:15:50.740 [2024-07-25 03:59:05.823502] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5603: invalid cntlid range [0-65519] 00:15:50.740 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 
-- # out='request: 00:15:50.740 { 00:15:50.740 "nqn": "nqn.2016-06.io.spdk:cnode5603", 00:15:50.740 "min_cntlid": 0, 00:15:50.740 "method": "nvmf_create_subsystem", 00:15:50.740 "req_id": 1 00:15:50.740 } 00:15:50.740 Got JSON-RPC error response 00:15:50.740 response: 00:15:50.740 { 00:15:50.740 "code": -32602, 00:15:50.740 "message": "Invalid cntlid range [0-65519]" 00:15:50.740 }' 00:15:50.740 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:50.740 { 00:15:50.740 "nqn": "nqn.2016-06.io.spdk:cnode5603", 00:15:50.740 "min_cntlid": 0, 00:15:50.740 "method": "nvmf_create_subsystem", 00:15:50.740 "req_id": 1 00:15:50.740 } 00:15:50.740 Got JSON-RPC error response 00:15:50.740 response: 00:15:50.740 { 00:15:50.740 "code": -32602, 00:15:50.740 "message": "Invalid cntlid range [0-65519]" 00:15:50.740 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.740 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13772 -i 65520 00:15:50.997 [2024-07-25 03:59:06.068341] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13772: invalid cntlid range [65520-65519] 00:15:50.997 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:50.997 { 00:15:50.997 "nqn": "nqn.2016-06.io.spdk:cnode13772", 00:15:50.997 "min_cntlid": 65520, 00:15:50.997 "method": "nvmf_create_subsystem", 00:15:50.997 "req_id": 1 00:15:50.997 } 00:15:50.997 Got JSON-RPC error response 00:15:50.997 response: 00:15:50.997 { 00:15:50.997 "code": -32602, 00:15:50.997 "message": "Invalid cntlid range [65520-65519]" 00:15:50.997 }' 00:15:50.997 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:50.997 { 00:15:50.997 "nqn": "nqn.2016-06.io.spdk:cnode13772", 00:15:50.997 "min_cntlid": 65520, 00:15:50.997 
"method": "nvmf_create_subsystem", 00:15:50.997 "req_id": 1 00:15:50.997 } 00:15:50.997 Got JSON-RPC error response 00:15:50.997 response: 00:15:50.997 { 00:15:50.997 "code": -32602, 00:15:50.997 "message": "Invalid cntlid range [65520-65519]" 00:15:50.997 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.997 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4760 -I 0 00:15:51.255 [2024-07-25 03:59:06.317163] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4760: invalid cntlid range [1-0] 00:15:51.255 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:51.255 { 00:15:51.255 "nqn": "nqn.2016-06.io.spdk:cnode4760", 00:15:51.255 "max_cntlid": 0, 00:15:51.255 "method": "nvmf_create_subsystem", 00:15:51.255 "req_id": 1 00:15:51.255 } 00:15:51.255 Got JSON-RPC error response 00:15:51.255 response: 00:15:51.255 { 00:15:51.255 "code": -32602, 00:15:51.255 "message": "Invalid cntlid range [1-0]" 00:15:51.255 }' 00:15:51.255 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:51.255 { 00:15:51.255 "nqn": "nqn.2016-06.io.spdk:cnode4760", 00:15:51.255 "max_cntlid": 0, 00:15:51.255 "method": "nvmf_create_subsystem", 00:15:51.255 "req_id": 1 00:15:51.255 } 00:15:51.255 Got JSON-RPC error response 00:15:51.255 response: 00:15:51.255 { 00:15:51.255 "code": -32602, 00:15:51.255 "message": "Invalid cntlid range [1-0]" 00:15:51.255 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:51.255 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29685 -I 65520 00:15:51.512 [2024-07-25 03:59:06.557926] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode29685: invalid cntlid range [1-65520] 00:15:51.512 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:51.512 { 00:15:51.512 "nqn": "nqn.2016-06.io.spdk:cnode29685", 00:15:51.512 "max_cntlid": 65520, 00:15:51.512 "method": "nvmf_create_subsystem", 00:15:51.512 "req_id": 1 00:15:51.512 } 00:15:51.512 Got JSON-RPC error response 00:15:51.512 response: 00:15:51.512 { 00:15:51.512 "code": -32602, 00:15:51.512 "message": "Invalid cntlid range [1-65520]" 00:15:51.512 }' 00:15:51.512 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:51.512 { 00:15:51.512 "nqn": "nqn.2016-06.io.spdk:cnode29685", 00:15:51.512 "max_cntlid": 65520, 00:15:51.512 "method": "nvmf_create_subsystem", 00:15:51.512 "req_id": 1 00:15:51.512 } 00:15:51.512 Got JSON-RPC error response 00:15:51.512 response: 00:15:51.512 { 00:15:51.512 "code": -32602, 00:15:51.512 "message": "Invalid cntlid range [1-65520]" 00:15:51.512 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:51.512 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode194 -i 6 -I 5 00:15:51.770 [2024-07-25 03:59:06.814791] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode194: invalid cntlid range [6-5] 00:15:51.770 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:51.770 { 00:15:51.770 "nqn": "nqn.2016-06.io.spdk:cnode194", 00:15:51.770 "min_cntlid": 6, 00:15:51.770 "max_cntlid": 5, 00:15:51.770 "method": "nvmf_create_subsystem", 00:15:51.770 "req_id": 1 00:15:51.770 } 00:15:51.770 Got JSON-RPC error response 00:15:51.770 response: 00:15:51.770 { 00:15:51.770 "code": -32602, 00:15:51.770 "message": "Invalid cntlid range [6-5]" 00:15:51.770 }' 00:15:51.770 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@84 -- # [[ request: 00:15:51.770 { 00:15:51.770 "nqn": "nqn.2016-06.io.spdk:cnode194", 00:15:51.770 "min_cntlid": 6, 00:15:51.770 "max_cntlid": 5, 00:15:51.770 "method": "nvmf_create_subsystem", 00:15:51.770 "req_id": 1 00:15:51.770 } 00:15:51.770 Got JSON-RPC error response 00:15:51.770 response: 00:15:51.770 { 00:15:51.770 "code": -32602, 00:15:51.770 "message": "Invalid cntlid range [6-5]" 00:15:51.770 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:51.770 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:51.770 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:51.770 { 00:15:51.770 "name": "foobar", 00:15:51.770 "method": "nvmf_delete_target", 00:15:51.770 "req_id": 1 00:15:51.770 } 00:15:51.770 Got JSON-RPC error response 00:15:51.770 response: 00:15:51.770 { 00:15:51.770 "code": -32602, 00:15:51.770 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:51.770 }' 00:15:51.770 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:51.770 { 00:15:51.770 "name": "foobar", 00:15:51.770 "method": "nvmf_delete_target", 00:15:51.770 "req_id": 1 00:15:51.770 } 00:15:51.770 Got JSON-RPC error response 00:15:51.770 response: 00:15:51.770 { 00:15:51.770 "code": -32602, 00:15:51.770 "message": "The specified target doesn't exist, cannot delete it." 
00:15:51.770 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:51.770 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:51.770 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:51.770 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:51.770 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:15:51.770 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.770 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:15:51.770 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.770 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.770 rmmod nvme_tcp 00:15:51.770 rmmod nvme_fabrics 00:15:51.770 rmmod nvme_keyring 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 808297 ']' 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 808297 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 808297 ']' 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 808297 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 808297 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 808297' 00:15:51.770 killing process with pid 808297 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 808297 00:15:51.770 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 808297 00:15:52.028 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:52.028 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:52.028 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:52.028 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.028 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:52.028 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.028 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.028 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:54.558 00:15:54.558 real 0m8.455s 00:15:54.558 user 0m19.560s 00:15:54.558 sys 0m2.361s 
00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:54.558 ************************************ 00:15:54.558 END TEST nvmf_invalid 00:15:54.558 ************************************ 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.558 ************************************ 00:15:54.558 START TEST nvmf_connect_stress 00:15:54.558 ************************************ 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:54.558 * Looking for test storage... 
00:15:54.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:54.558 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:54.559 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:54.559 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.559 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.559 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.559 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:54.559 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:54.559 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:54.559 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:56.460 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:56.461 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.461 03:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:56.461 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.461 03:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:56.461 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:56.461 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:56.461 
03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.461 
03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:56.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:15:56.461 00:15:56.461 --- 10.0.0.2 ping statistics --- 00:15:56.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.461 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:56.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:15:56.461 00:15:56.461 --- 10.0.0.1 ping statistics --- 00:15:56.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.461 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=810925 00:15:56.461 03:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 810925 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 810925 ']' 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.461 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.461 [2024-07-25 03:59:11.553837] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:15:56.461 [2024-07-25 03:59:11.553920] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.461 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.462 [2024-07-25 03:59:11.590742] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
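The nvmfappstart/waitforlisten sequence traced above launches nvmf_tgt inside the target namespace and then polls (up to max_retries=100) for the process to come up and listen on /var/tmp/spdk.sock. A minimal sketch of that polling pattern, with a background `touch` of a temp file standing in for the real SPDK RPC socket (the actual helper also checks that the PID is alive):

```shell
# Hedged sketch of the waitforlisten pattern: poll until the "socket" path
# appears, retrying up to max_retries times. The temp file is a stand-in
# for /var/tmp/spdk.sock; no real SPDK target is started here.
sock=$(mktemp -u)
( sleep 0.3; touch "$sock" ) &   # simulates nvmf_tgt opening its RPC socket
max_retries=100
ready=no
for _ in $(seq 1 "$max_retries"); do
  if [ -e "$sock" ]; then
    ready=yes
    break
  fi
  sleep 0.05
done
echo "$ready"
```

In the real harness the loop aborts with an error once max_retries is exhausted, so a target that never opens its socket fails the test quickly instead of hanging.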
00:15:56.462 [2024-07-25 03:59:11.621941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:56.462 [2024-07-25 03:59:11.713847] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.462 [2024-07-25 03:59:11.713911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.462 [2024-07-25 03:59:11.713928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.462 [2024-07-25 03:59:11.713942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.462 [2024-07-25 03:59:11.713953] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.462 [2024-07-25 03:59:11.714045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.462 [2024-07-25 03:59:11.714099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.462 [2024-07-25 03:59:11.714102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd 
nvmf_create_transport -t tcp -o -u 8192 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.720 [2024-07-25 03:59:11.856247] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.720 [2024-07-25 03:59:11.884449] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.720 NULL1 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=810952 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 
20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 
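The repeated `for i in $(seq 1 20)` / `cat` pairs traced here build up the rpc.txt file one appended batch per iteration. The payload each `cat` heredoc writes is elided in the trace, so the sketch below uses a one-line placeholder per iteration just to illustrate the accumulation shape:

```shell
# Sketch of the rpc.txt accumulation loop from connect_stress.sh; the real
# heredoc contents are not shown in the log, so each batch is a placeholder.
rpcs=$(mktemp)
: > "$rpcs"
for i in $(seq 1 20); do
  cat >> "$rpcs" <<EOF
placeholder RPC batch $i
EOF
done
wc -l < "$rpcs"
```

After 20 iterations the file holds one line per batch, which the harness later replays against the target's RPC socket.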
00:15:56.720 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.720 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.721 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.721 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.721 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.721 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.721 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:56.721 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:56.721 03:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952 00:15:56.721 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.721 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.721 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.978 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.978 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952 00:15:56.978 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.978 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.978 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.544 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.544 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952 00:15:57.544 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.544 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.544 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.836 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.836 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952 00:15:57.836 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.836 03:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.836 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.094 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.094 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952 00:15:58.094 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.094 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.094 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.351 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.351 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952 00:15:58.351 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.351 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.351 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.607 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.607 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952 00:15:58.607 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.607 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.607 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.171 03:59:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.171 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952 00:15:59.171 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.171 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.171 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.428 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.428 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952 00:15:59.428 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.428 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.428 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.685 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.685 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952 00:15:59.685 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.685 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.685 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952 00:15:59.941 
03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.198 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.198 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952 00:16:00.198 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.198 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.198 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.763 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.763 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952 00:16:00.763 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.763 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.763 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.020 03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.020 03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952 00:16:01.021 03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:01.021 03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.021 
03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:01.278 03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.278 03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:01.278 03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:01.278 03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.278 03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:01.535 03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.535 03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:01.535 03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:01.535 03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.535 03:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:01.793 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.793 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:01.793 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:01.793 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.793 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:02.358 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.358 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:02.358 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:02.358 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.358 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:02.616 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.616 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:02.616 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:02.616 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.616 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:02.873 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.873 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:02.873 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:02.873 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.873 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:03.130 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:03.130 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:03.130 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:03.130 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:03.130 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:03.694 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:03.694 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:03.694 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:03.694 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:03.694 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:03.951 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:03.951 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:03.951 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:03.951 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:03.951 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:04.208 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:04.208 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:04.208 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:04.208 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.208 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:04.465 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:04.465 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:04.465 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:04.465 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.465 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:04.722 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:04.722 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:04.722 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:04.722 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.722 03:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:05.286 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:05.286 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:05.286 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:05.286 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:05.286 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:05.543 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:05.543 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:05.543 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:05.543 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:05.543 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:05.800 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:05.800 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:05.800 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:05.800 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:05.800 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:06.058 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:06.058 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:06.058 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:06.058 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:06.058 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:06.315 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:06.315 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:06.315 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:06.315 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:06.315 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:06.879 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:06.879 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:06.879 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:06.879 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:06.879 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:06.879 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 810952
00:16:07.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (810952) - No such process
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 810952
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:07.138 rmmod nvme_tcp
00:16:07.138 rmmod nvme_fabrics
00:16:07.138 rmmod nvme_keyring
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 810925 ']'
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 810925
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 810925 ']'
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 810925
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 810925
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 810925'
00:16:07.138 killing process with pid 810925
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 810925
00:16:07.138 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 810925
00:16:07.396 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:07.396 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:07.396 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:07.396 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:07.396 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:07.396 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:07.396 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:07.396 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:09.295 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:09.295
00:16:09.295 real 0m15.228s
00:16:09.295 user 0m38.077s
00:16:09.295 sys 0m5.955s
00:16:09.295 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:09.295 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:09.295 ************************************
00:16:09.295 END TEST nvmf_connect_stress
00:16:09.295 ************************************
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:09.553 ************************************
00:16:09.553 START TEST nvmf_fused_ordering
00:16:09.553 ************************************
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:16:09.553 * Looking for test storage...
00:16:09.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:09.553 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable
00:16:09.554 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=()
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=()
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=()
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=()
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=()
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=()
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=()
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:16:11.453 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:16:11.453 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:16:11.453 Found net devices under 0000:0a:00.0: cvl_0_0
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]]
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:16:11.453 Found net devices under 0000:0a:00.1: cvl_0_1
00:16:11.453 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:11.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:11.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms
00:16:11.454
00:16:11.454 --- 10.0.0.2 ping statistics ---
00:16:11.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:11.454 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:11.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:11.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms
00:16:11.454
00:16:11.454 --- 10.0.0.1 ping statistics ---
00:16:11.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:11.454 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=814089
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 814089
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 814089 ']'
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:11.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:11.454 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:11.712 [2024-07-25 03:59:26.793751] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization...
00:16:11.712 [2024-07-25 03:59:26.793841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:11.712 EAL: No free 2048 kB hugepages reported on node 1
00:16:11.712 [2024-07-25 03:59:26.831830] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:16:11.712 [2024-07-25 03:59:26.859740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.712 [2024-07-25 03:59:26.950885] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.712 [2024-07-25 03:59:26.950941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.712 [2024-07-25 03:59:26.950955] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.712 [2024-07-25 03:59:26.950975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.712 [2024-07-25 03:59:26.950985] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.712 [2024-07-25 03:59:26.951014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.969 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:11.969 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:16:11.969 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- common/autotest_common.sh@10 -- # set +x 00:16:11.970 [2024-07-25 03:59:27.096133] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:11.970 [2024-07-25 03:59:27.112386] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:11.970 NULL1 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.970 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:11.970 [2024-07-25 03:59:27.157683] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:16:11.970 [2024-07-25 03:59:27.157725] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814114 ] 00:16:11.970 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.970 [2024-07-25 03:59:27.190873] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:16:12.544 Attached to nqn.2016-06.io.spdk:cnode1 00:16:12.544 Namespace ID: 1 size: 1GB 00:16:12.544 fused_ordering(0) 00:16:12.544 fused_ordering(1) 00:16:12.544 fused_ordering(2) 00:16:12.544 fused_ordering(3) 00:16:12.544 fused_ordering(4) 00:16:12.544 fused_ordering(5) 00:16:12.544 fused_ordering(6) 00:16:12.544 fused_ordering(7) 00:16:12.544 fused_ordering(8) 00:16:12.544 fused_ordering(9) 00:16:12.544 fused_ordering(10) 00:16:12.544 fused_ordering(11) 00:16:12.544 fused_ordering(12) 00:16:12.544 fused_ordering(13) 00:16:12.544 fused_ordering(14) 00:16:12.544 fused_ordering(15) 00:16:12.544 fused_ordering(16) 00:16:12.544 fused_ordering(17) 00:16:12.544 fused_ordering(18) 00:16:12.544 fused_ordering(19) 00:16:12.544 fused_ordering(20) 00:16:12.544 fused_ordering(21) 00:16:12.544 fused_ordering(22) 00:16:12.544 fused_ordering(23) 00:16:12.544 fused_ordering(24) 00:16:12.544 fused_ordering(25) 00:16:12.544 fused_ordering(26) 00:16:12.544 fused_ordering(27) 00:16:12.544 fused_ordering(28) 00:16:12.544 fused_ordering(29) 00:16:12.544 fused_ordering(30) 00:16:12.544 fused_ordering(31) 00:16:12.544 fused_ordering(32) 00:16:12.544 fused_ordering(33) 00:16:12.544 fused_ordering(34) 00:16:12.544 fused_ordering(35) 00:16:12.544 fused_ordering(36) 00:16:12.544 fused_ordering(37) 00:16:12.544 fused_ordering(38) 00:16:12.544 fused_ordering(39) 00:16:12.544 fused_ordering(40) 00:16:12.544 fused_ordering(41) 00:16:12.544 fused_ordering(42) 00:16:12.544 fused_ordering(43) 00:16:12.544 fused_ordering(44) 00:16:12.544 fused_ordering(45) 00:16:12.544 fused_ordering(46) 00:16:12.544 fused_ordering(47) 00:16:12.544 fused_ordering(48) 00:16:12.544 fused_ordering(49) 00:16:12.544 fused_ordering(50) 00:16:12.544 fused_ordering(51) 00:16:12.544 fused_ordering(52) 00:16:12.544 fused_ordering(53) 00:16:12.544 fused_ordering(54) 00:16:12.544 fused_ordering(55) 00:16:12.544 fused_ordering(56) 00:16:12.544 fused_ordering(57) 00:16:12.544 fused_ordering(58) 
00:16:12.544 fused_ordering(59) 00:16:12.544 fused_ordering(60) 00:16:12.544 fused_ordering(61) 00:16:12.544 fused_ordering(62) 00:16:12.544 fused_ordering(63) 00:16:12.544 fused_ordering(64) 00:16:12.544 fused_ordering(65) 00:16:12.544 fused_ordering(66) 00:16:12.544 fused_ordering(67) 00:16:12.544 fused_ordering(68) 00:16:12.544 fused_ordering(69) 00:16:12.544 fused_ordering(70) 00:16:12.544 fused_ordering(71) 00:16:12.544 fused_ordering(72) 00:16:12.544 fused_ordering(73) 00:16:12.544 fused_ordering(74) 00:16:12.544 fused_ordering(75) 00:16:12.544 fused_ordering(76) 00:16:12.544 fused_ordering(77) 00:16:12.544 fused_ordering(78) 00:16:12.544 fused_ordering(79) 00:16:12.544 fused_ordering(80) 00:16:12.544 fused_ordering(81) 00:16:12.544 fused_ordering(82) 00:16:12.544 fused_ordering(83) 00:16:12.544 fused_ordering(84) 00:16:12.544 fused_ordering(85) 00:16:12.544 fused_ordering(86) 00:16:12.544 fused_ordering(87) 00:16:12.544 fused_ordering(88) 00:16:12.544 fused_ordering(89) 00:16:12.544 fused_ordering(90) 00:16:12.544 fused_ordering(91) 00:16:12.544 fused_ordering(92) 00:16:12.544 fused_ordering(93) 00:16:12.544 fused_ordering(94) 00:16:12.544 fused_ordering(95) 00:16:12.544 fused_ordering(96) 00:16:12.544 fused_ordering(97) 00:16:12.544 fused_ordering(98) 00:16:12.544 fused_ordering(99) 00:16:12.544 fused_ordering(100) 00:16:12.544 fused_ordering(101) 00:16:12.544 fused_ordering(102) 00:16:12.544 fused_ordering(103) 00:16:12.544 fused_ordering(104) 00:16:12.544 fused_ordering(105) 00:16:12.544 fused_ordering(106) 00:16:12.544 fused_ordering(107) 00:16:12.544 fused_ordering(108) 00:16:12.544 fused_ordering(109) 00:16:12.544 fused_ordering(110) 00:16:12.544 fused_ordering(111) 00:16:12.544 fused_ordering(112) 00:16:12.544 fused_ordering(113) 00:16:12.544 fused_ordering(114) 00:16:12.544 fused_ordering(115) 00:16:12.544 fused_ordering(116) 00:16:12.544 fused_ordering(117) 00:16:12.544 fused_ordering(118) 00:16:12.544 fused_ordering(119) 00:16:12.544 
fused_ordering(120) 00:16:12.544 fused_ordering(121) 00:16:12.544 fused_ordering(122) 00:16:12.544 fused_ordering(123) 00:16:12.544 fused_ordering(124) 00:16:12.544 fused_ordering(125) 00:16:12.544 fused_ordering(126) 00:16:12.544 fused_ordering(127) 00:16:12.544 fused_ordering(128) 00:16:12.544 fused_ordering(129) 00:16:12.544 fused_ordering(130) 00:16:12.544 fused_ordering(131) 00:16:12.544 fused_ordering(132) 00:16:12.544 fused_ordering(133) 00:16:12.544 fused_ordering(134) 00:16:12.544 fused_ordering(135) 00:16:12.544 fused_ordering(136) 00:16:12.544 fused_ordering(137) 00:16:12.544 fused_ordering(138) 00:16:12.544 fused_ordering(139) 00:16:12.544 fused_ordering(140) 00:16:12.544 fused_ordering(141) 00:16:12.544 fused_ordering(142) 00:16:12.544 fused_ordering(143) 00:16:12.544 fused_ordering(144) 00:16:12.544 fused_ordering(145) 00:16:12.544 fused_ordering(146) 00:16:12.544 fused_ordering(147) 00:16:12.544 fused_ordering(148) 00:16:12.544 fused_ordering(149) 00:16:12.544 fused_ordering(150) 00:16:12.544 fused_ordering(151) 00:16:12.544 fused_ordering(152) 00:16:12.544 fused_ordering(153) 00:16:12.544 fused_ordering(154) 00:16:12.544 fused_ordering(155) 00:16:12.544 fused_ordering(156) 00:16:12.544 fused_ordering(157) 00:16:12.544 fused_ordering(158) 00:16:12.544 fused_ordering(159) 00:16:12.544 fused_ordering(160) 00:16:12.544 fused_ordering(161) 00:16:12.544 fused_ordering(162) 00:16:12.544 fused_ordering(163) 00:16:12.544 fused_ordering(164) 00:16:12.544 fused_ordering(165) 00:16:12.544 fused_ordering(166) 00:16:12.544 fused_ordering(167) 00:16:12.544 fused_ordering(168) 00:16:12.544 fused_ordering(169) 00:16:12.544 fused_ordering(170) 00:16:12.544 fused_ordering(171) 00:16:12.544 fused_ordering(172) 00:16:12.544 fused_ordering(173) 00:16:12.544 fused_ordering(174) 00:16:12.544 fused_ordering(175) 00:16:12.544 fused_ordering(176) 00:16:12.544 fused_ordering(177) 00:16:12.544 fused_ordering(178) 00:16:12.544 fused_ordering(179) 00:16:12.544 fused_ordering(180) 
00:16:12.544 fused_ordering(181) 00:16:12.544 fused_ordering(182) 00:16:12.544 fused_ordering(183) 00:16:12.544 fused_ordering(184) 00:16:12.544 fused_ordering(185) 00:16:12.544 fused_ordering(186) 00:16:12.544 fused_ordering(187) 00:16:12.544 fused_ordering(188) 00:16:12.544 fused_ordering(189) 00:16:12.544 fused_ordering(190) 00:16:12.544 fused_ordering(191) 00:16:12.544 fused_ordering(192) 00:16:12.544 fused_ordering(193) 00:16:12.544 fused_ordering(194) 00:16:12.544 fused_ordering(195) 00:16:12.544 fused_ordering(196) 00:16:12.544 fused_ordering(197) 00:16:12.544 fused_ordering(198) 00:16:12.544 fused_ordering(199) 00:16:12.544 fused_ordering(200) 00:16:12.544 fused_ordering(201) 00:16:12.544 fused_ordering(202) 00:16:12.544 fused_ordering(203) 00:16:12.544 fused_ordering(204) 00:16:12.544 fused_ordering(205) 00:16:13.108 fused_ordering(206) 00:16:13.108 fused_ordering(207) 00:16:13.108 fused_ordering(208) 00:16:13.108 fused_ordering(209) 00:16:13.108 fused_ordering(210) 00:16:13.108 fused_ordering(211) 00:16:13.108 fused_ordering(212) 00:16:13.108 fused_ordering(213) 00:16:13.108 fused_ordering(214) 00:16:13.108 fused_ordering(215) 00:16:13.108 fused_ordering(216) 00:16:13.108 fused_ordering(217) 00:16:13.108 fused_ordering(218) 00:16:13.108 fused_ordering(219) 00:16:13.108 fused_ordering(220) 00:16:13.108 fused_ordering(221) 00:16:13.108 fused_ordering(222) 00:16:13.108 fused_ordering(223) 00:16:13.108 fused_ordering(224) 00:16:13.108 fused_ordering(225) 00:16:13.108 fused_ordering(226) 00:16:13.108 fused_ordering(227) 00:16:13.108 fused_ordering(228) 00:16:13.108 fused_ordering(229) 00:16:13.108 fused_ordering(230) 00:16:13.108 fused_ordering(231) 00:16:13.108 fused_ordering(232) 00:16:13.108 fused_ordering(233) 00:16:13.108 fused_ordering(234) 00:16:13.108 fused_ordering(235) 00:16:13.108 fused_ordering(236) 00:16:13.108 fused_ordering(237) 00:16:13.108 fused_ordering(238) 00:16:13.108 fused_ordering(239) 00:16:13.108 fused_ordering(240) 00:16:13.108 
fused_ordering(241) 00:16:13.108 fused_ordering(242) 00:16:13.108 fused_ordering(243) 00:16:13.108 fused_ordering(244) 00:16:13.108 fused_ordering(245) 00:16:13.108 fused_ordering(246) 00:16:13.108 fused_ordering(247) 00:16:13.108 fused_ordering(248) 00:16:13.108 fused_ordering(249) 00:16:13.108 fused_ordering(250) 00:16:13.108 fused_ordering(251) 00:16:13.108 fused_ordering(252) 00:16:13.108 fused_ordering(253) 00:16:13.108 fused_ordering(254) 00:16:13.108 fused_ordering(255) 00:16:13.108 fused_ordering(256) 00:16:13.108 fused_ordering(257) 00:16:13.108 fused_ordering(258) 00:16:13.108 fused_ordering(259) 00:16:13.108 fused_ordering(260) 00:16:13.108 fused_ordering(261) 00:16:13.108 fused_ordering(262) 00:16:13.108 fused_ordering(263) 00:16:13.108 fused_ordering(264) 00:16:13.108 fused_ordering(265) 00:16:13.108 fused_ordering(266) 00:16:13.108 fused_ordering(267) 00:16:13.108 fused_ordering(268) 00:16:13.108 fused_ordering(269) 00:16:13.108 fused_ordering(270) 00:16:13.108 fused_ordering(271) 00:16:13.108 fused_ordering(272) 00:16:13.108 fused_ordering(273) 00:16:13.108 fused_ordering(274) 00:16:13.108 fused_ordering(275) 00:16:13.108 fused_ordering(276) 00:16:13.108 fused_ordering(277) 00:16:13.108 fused_ordering(278) 00:16:13.108 fused_ordering(279) 00:16:13.108 fused_ordering(280) 00:16:13.108 fused_ordering(281) 00:16:13.108 fused_ordering(282) 00:16:13.108 fused_ordering(283) 00:16:13.108 fused_ordering(284) 00:16:13.108 fused_ordering(285) 00:16:13.108 fused_ordering(286) 00:16:13.108 fused_ordering(287) 00:16:13.108 fused_ordering(288) 00:16:13.108 fused_ordering(289) 00:16:13.108 fused_ordering(290) 00:16:13.109 fused_ordering(291) 00:16:13.109 fused_ordering(292) 00:16:13.109 fused_ordering(293) 00:16:13.109 fused_ordering(294) 00:16:13.109 fused_ordering(295) 00:16:13.109 fused_ordering(296) 00:16:13.109 fused_ordering(297) 00:16:13.109 fused_ordering(298) 00:16:13.109 fused_ordering(299) 00:16:13.109 fused_ordering(300) 00:16:13.109 fused_ordering(301) 
00:16:13.109 fused_ordering(302) 00:16:13.109 fused_ordering(303) 00:16:13.109 fused_ordering(304) 00:16:13.109 fused_ordering(305) 00:16:13.109 fused_ordering(306) 00:16:13.109 fused_ordering(307) 00:16:13.109 fused_ordering(308) 00:16:13.109 fused_ordering(309) 00:16:13.109 fused_ordering(310) 00:16:13.109 fused_ordering(311) 00:16:13.109 fused_ordering(312) 00:16:13.109 fused_ordering(313) 00:16:13.109 fused_ordering(314) 00:16:13.109 fused_ordering(315) 00:16:13.109 fused_ordering(316) 00:16:13.109 fused_ordering(317) 00:16:13.109 fused_ordering(318) 00:16:13.109 fused_ordering(319) 00:16:13.109 fused_ordering(320) 00:16:13.109 fused_ordering(321) 00:16:13.109 fused_ordering(322) 00:16:13.109 fused_ordering(323) 00:16:13.109 fused_ordering(324) 00:16:13.109 fused_ordering(325) 00:16:13.109 fused_ordering(326) 00:16:13.109 fused_ordering(327) 00:16:13.109 fused_ordering(328) 00:16:13.109 fused_ordering(329) 00:16:13.109 fused_ordering(330) 00:16:13.109 fused_ordering(331) 00:16:13.109 fused_ordering(332) 00:16:13.109 fused_ordering(333) 00:16:13.109 fused_ordering(334) 00:16:13.109 fused_ordering(335) 00:16:13.109 fused_ordering(336) 00:16:13.109 fused_ordering(337) 00:16:13.109 fused_ordering(338) 00:16:13.109 fused_ordering(339) 00:16:13.109 fused_ordering(340) 00:16:13.109 fused_ordering(341) 00:16:13.109 fused_ordering(342) 00:16:13.109 fused_ordering(343) 00:16:13.109 fused_ordering(344) 00:16:13.109 fused_ordering(345) 00:16:13.109 fused_ordering(346) 00:16:13.109 fused_ordering(347) 00:16:13.109 fused_ordering(348) 00:16:13.109 fused_ordering(349) 00:16:13.109 fused_ordering(350) 00:16:13.109 fused_ordering(351) 00:16:13.109 fused_ordering(352) 00:16:13.109 fused_ordering(353) 00:16:13.109 fused_ordering(354) 00:16:13.109 fused_ordering(355) 00:16:13.109 fused_ordering(356) 00:16:13.109 fused_ordering(357) 00:16:13.109 fused_ordering(358) 00:16:13.109 fused_ordering(359) 00:16:13.109 fused_ordering(360) 00:16:13.109 fused_ordering(361) 00:16:13.109 
fused_ordering(362) 00:16:13.109 fused_ordering(363) 00:16:13.109 fused_ordering(364) 00:16:13.109 fused_ordering(365) 00:16:13.109 fused_ordering(366) 00:16:13.109 fused_ordering(367) 00:16:13.109 fused_ordering(368) 00:16:13.109 fused_ordering(369) 00:16:13.109 fused_ordering(370) 00:16:13.109 fused_ordering(371) 00:16:13.109 fused_ordering(372) 00:16:13.109 fused_ordering(373) 00:16:13.109 fused_ordering(374) 00:16:13.109 fused_ordering(375) 00:16:13.109 fused_ordering(376) 00:16:13.109 fused_ordering(377) 00:16:13.109 fused_ordering(378) 00:16:13.109 fused_ordering(379) 00:16:13.109 fused_ordering(380) 00:16:13.109 fused_ordering(381) 00:16:13.109 fused_ordering(382) 00:16:13.109 fused_ordering(383) 00:16:13.109 fused_ordering(384) 00:16:13.109 fused_ordering(385) 00:16:13.109 fused_ordering(386) 00:16:13.109 fused_ordering(387) 00:16:13.109 fused_ordering(388) 00:16:13.109 fused_ordering(389) 00:16:13.109 fused_ordering(390) 00:16:13.109 fused_ordering(391) 00:16:13.109 fused_ordering(392) 00:16:13.109 fused_ordering(393) 00:16:13.109 fused_ordering(394) 00:16:13.109 fused_ordering(395) 00:16:13.109 fused_ordering(396) 00:16:13.109 fused_ordering(397) 00:16:13.109 fused_ordering(398) 00:16:13.109 fused_ordering(399) 00:16:13.109 fused_ordering(400) 00:16:13.109 fused_ordering(401) 00:16:13.109 fused_ordering(402) 00:16:13.109 fused_ordering(403) 00:16:13.109 fused_ordering(404) 00:16:13.109 fused_ordering(405) 00:16:13.109 fused_ordering(406) 00:16:13.109 fused_ordering(407) 00:16:13.109 fused_ordering(408) 00:16:13.109 fused_ordering(409) 00:16:13.109 fused_ordering(410) 00:16:13.367 fused_ordering(411) 00:16:13.367 fused_ordering(412) 00:16:13.367 fused_ordering(413) 00:16:13.367 fused_ordering(414) 00:16:13.367 fused_ordering(415) 00:16:13.367 fused_ordering(416) 00:16:13.367 fused_ordering(417) 00:16:13.367 fused_ordering(418) 00:16:13.367 fused_ordering(419) 00:16:13.367 fused_ordering(420) 00:16:13.367 fused_ordering(421) 00:16:13.367 fused_ordering(422) 
00:16:13.367 fused_ordering(423) 00:16:13.367 fused_ordering(424) 00:16:13.367 fused_ordering(425) 00:16:13.367 fused_ordering(426) 00:16:13.367 fused_ordering(427) 00:16:13.367 fused_ordering(428) 00:16:13.367 fused_ordering(429) 00:16:13.367 fused_ordering(430) 00:16:13.367 fused_ordering(431) 00:16:13.367 fused_ordering(432) 00:16:13.367 fused_ordering(433) 00:16:13.367 fused_ordering(434) 00:16:13.367 fused_ordering(435) 00:16:13.367 fused_ordering(436) 00:16:13.367 fused_ordering(437) 00:16:13.367 fused_ordering(438) 00:16:13.367 fused_ordering(439) 00:16:13.367 fused_ordering(440) 00:16:13.367 fused_ordering(441) 00:16:13.367 fused_ordering(442) 00:16:13.367 fused_ordering(443) 00:16:13.367 fused_ordering(444) 00:16:13.367 fused_ordering(445) 00:16:13.367 fused_ordering(446) 00:16:13.367 fused_ordering(447) 00:16:13.367 fused_ordering(448) 00:16:13.367 fused_ordering(449) 00:16:13.367 fused_ordering(450) 00:16:13.367 fused_ordering(451) 00:16:13.367 fused_ordering(452) 00:16:13.367 fused_ordering(453) 00:16:13.367 fused_ordering(454) 00:16:13.367 fused_ordering(455) 00:16:13.367 fused_ordering(456) 00:16:13.367 fused_ordering(457) 00:16:13.367 fused_ordering(458) 00:16:13.367 fused_ordering(459) 00:16:13.367 fused_ordering(460) 00:16:13.367 fused_ordering(461) 00:16:13.367 fused_ordering(462) 00:16:13.367 fused_ordering(463) 00:16:13.367 fused_ordering(464) 00:16:13.367 fused_ordering(465) 00:16:13.367 fused_ordering(466) 00:16:13.367 fused_ordering(467) 00:16:13.367 fused_ordering(468) 00:16:13.367 fused_ordering(469) 00:16:13.367 fused_ordering(470) 00:16:13.367 fused_ordering(471) 00:16:13.367 fused_ordering(472) 00:16:13.367 fused_ordering(473) 00:16:13.367 fused_ordering(474) 00:16:13.367 fused_ordering(475) 00:16:13.367 fused_ordering(476) 00:16:13.367 fused_ordering(477) 00:16:13.367 fused_ordering(478) 00:16:13.367 fused_ordering(479) 00:16:13.367 fused_ordering(480) 00:16:13.367 fused_ordering(481) 00:16:13.367 fused_ordering(482) 00:16:13.367 
fused_ordering(483) 00:16:13.367 fused_ordering(484) 00:16:13.367 fused_ordering(485) 00:16:13.367 fused_ordering(486) 00:16:13.367 fused_ordering(487) 00:16:13.367 fused_ordering(488) 00:16:13.367 fused_ordering(489) 00:16:13.367 fused_ordering(490) 00:16:13.367 fused_ordering(491) 00:16:13.367 fused_ordering(492) 00:16:13.367 fused_ordering(493) 00:16:13.367 fused_ordering(494) 00:16:13.367 fused_ordering(495) 00:16:13.367 fused_ordering(496) 00:16:13.367 fused_ordering(497) 00:16:13.367 fused_ordering(498) 00:16:13.367 fused_ordering(499) 00:16:13.367 fused_ordering(500) 00:16:13.367 fused_ordering(501) 00:16:13.367 fused_ordering(502) 00:16:13.367 fused_ordering(503) 00:16:13.367 fused_ordering(504) 00:16:13.367 fused_ordering(505) 00:16:13.367 fused_ordering(506) 00:16:13.367 fused_ordering(507) 00:16:13.367 fused_ordering(508) 00:16:13.367 fused_ordering(509) 00:16:13.367 fused_ordering(510) 00:16:13.367 fused_ordering(511) 00:16:13.367 fused_ordering(512) 00:16:13.367 fused_ordering(513) 00:16:13.367 fused_ordering(514) 00:16:13.367 fused_ordering(515) 00:16:13.367 fused_ordering(516) 00:16:13.367 fused_ordering(517) 00:16:13.367 fused_ordering(518) 00:16:13.367 fused_ordering(519) 00:16:13.367 fused_ordering(520) 00:16:13.367 fused_ordering(521) 00:16:13.367 fused_ordering(522) 00:16:13.367 fused_ordering(523) 00:16:13.367 fused_ordering(524) 00:16:13.367 fused_ordering(525) 00:16:13.367 fused_ordering(526) 00:16:13.367 fused_ordering(527) 00:16:13.367 fused_ordering(528) 00:16:13.367 fused_ordering(529) 00:16:13.367 fused_ordering(530) 00:16:13.367 fused_ordering(531) 00:16:13.367 fused_ordering(532) 00:16:13.367 fused_ordering(533) 00:16:13.367 fused_ordering(534) 00:16:13.367 fused_ordering(535) 00:16:13.367 fused_ordering(536) 00:16:13.367 fused_ordering(537) 00:16:13.367 fused_ordering(538) 00:16:13.367 fused_ordering(539) 00:16:13.367 fused_ordering(540) 00:16:13.367 fused_ordering(541) 00:16:13.367 fused_ordering(542) 00:16:13.367 fused_ordering(543) 
00:16:13.367 fused_ordering(544) 00:16:13.367 fused_ordering(545) 00:16:13.367 fused_ordering(546) 00:16:13.367 fused_ordering(547) 00:16:13.367 fused_ordering(548) 00:16:13.367 fused_ordering(549) 00:16:13.367 fused_ordering(550) 00:16:13.367 fused_ordering(551) 00:16:13.367 fused_ordering(552) 00:16:13.367 fused_ordering(553) 00:16:13.367 fused_ordering(554) 00:16:13.367 fused_ordering(555) 00:16:13.367 fused_ordering(556) 00:16:13.367 fused_ordering(557) 00:16:13.367 fused_ordering(558) 00:16:13.367 fused_ordering(559) 00:16:13.367 fused_ordering(560) 00:16:13.367 fused_ordering(561) 00:16:13.367 fused_ordering(562) 00:16:13.367 fused_ordering(563) 00:16:13.367 fused_ordering(564) 00:16:13.367 fused_ordering(565) 00:16:13.367 fused_ordering(566) 00:16:13.367 fused_ordering(567) 00:16:13.367 fused_ordering(568) 00:16:13.367 fused_ordering(569) 00:16:13.367 fused_ordering(570) 00:16:13.367 fused_ordering(571) 00:16:13.367 fused_ordering(572) 00:16:13.367 fused_ordering(573) 00:16:13.367 fused_ordering(574) 00:16:13.367 fused_ordering(575) 00:16:13.367 fused_ordering(576) 00:16:13.367 fused_ordering(577) 00:16:13.367 fused_ordering(578) 00:16:13.367 fused_ordering(579) 00:16:13.367 fused_ordering(580) 00:16:13.367 fused_ordering(581) 00:16:13.367 fused_ordering(582) 00:16:13.367 fused_ordering(583) 00:16:13.367 fused_ordering(584) 00:16:13.367 fused_ordering(585) 00:16:13.367 fused_ordering(586) 00:16:13.367 fused_ordering(587) 00:16:13.367 fused_ordering(588) 00:16:13.367 fused_ordering(589) 00:16:13.367 fused_ordering(590) 00:16:13.367 fused_ordering(591) 00:16:13.367 fused_ordering(592) 00:16:13.367 fused_ordering(593) 00:16:13.367 fused_ordering(594) 00:16:13.367 fused_ordering(595) 00:16:13.367 fused_ordering(596) 00:16:13.367 fused_ordering(597) 00:16:13.367 fused_ordering(598) 00:16:13.367 fused_ordering(599) 00:16:13.367 fused_ordering(600) 00:16:13.367 fused_ordering(601) 00:16:13.367 fused_ordering(602) 00:16:13.367 fused_ordering(603) 00:16:13.367 
fused_ordering(604) 00:16:13.367 [repetitive fused_ordering(605) through fused_ordering(1022) output condensed; timestamps advanced from 00:16:13.367 through 00:16:14.301 to 00:16:14.868] fused_ordering(1023) 00:16:14.868 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # 
trap - SIGINT SIGTERM EXIT 00:16:14.868 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:14.868 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.868 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:14.868 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.868 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:14.868 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.868 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.868 rmmod nvme_tcp 00:16:14.868 rmmod nvme_fabrics 00:16:14.868 rmmod nvme_keyring 00:16:14.869 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.869 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:14.869 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:14.869 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 814089 ']' 00:16:14.869 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 814089 00:16:14.869 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 814089 ']' 00:16:14.869 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 814089 00:16:14.869 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:16:14.869 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.869 03:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 814089 00:16:15.128 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:15.128 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:15.128 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 814089' 00:16:15.128 killing process with pid 814089 00:16:15.128 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 814089 00:16:15.128 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 814089 00:16:15.128 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:15.128 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:15.128 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:15.128 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.128 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:15.128 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.128 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:15.128 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.687 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:17.687 00:16:17.687 real 0m7.806s 00:16:17.687 user 0m5.575s 
00:16:17.687 sys 0m3.450s 00:16:17.687 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:17.687 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:17.687 ************************************ 00:16:17.687 END TEST nvmf_fused_ordering 00:16:17.688 ************************************ 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:17.688 ************************************ 00:16:17.688 START TEST nvmf_ns_masking 00:16:17.688 ************************************ 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:17.688 * Looking for test storage... 
00:16:17.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.688 
03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # [repetitive PATH assembly from paths/export.sh@2 through @6 condensed: each step prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the system PATH, duplicating those segments; full values elided] 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=1d7c1c3a-3bf8-41b8-8951-cc1d773418ba 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=cc99ced0-4d9f-451f-9f62-17a9115da94b 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=beb82a90-3940-49dc-b530-8b3766a17e7f 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.688 03:59:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:17.688 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:19.596 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:19.596 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:19.596 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:19.596 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:19.596 03:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.596 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.597 03:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:19.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:16:19.597 00:16:19.597 --- 10.0.0.2 ping statistics --- 00:16:19.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.597 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:16:19.597 00:16:19.597 --- 10.0.0.1 ping statistics --- 00:16:19.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.597 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=816439 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 816439 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 816439 ']' 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.597 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:19.597 [2024-07-25 03:59:34.696055] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
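The startup sequence above (nvmfappstart launching nvmf_tgt inside the target namespace, then waitforlisten blocking until the RPC socket is ready) can be sketched as a small polling helper. This is an illustrative reconstruction, not the real helper: the socket path and retry budget are assumptions, and the actual waitforlisten in common/autotest_common.sh also verifies the pid is still alive while it waits.

```shell
# Hedged sketch of the waitforlisten pattern seen in the trace above:
# poll until the SPDK RPC UNIX socket appears, or give up after a
# fixed number of retries. Path and retry count are illustrative.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock}
    local retries=${2:-100}
    while (( retries-- > 0 )); do
        # -S: path exists and is a UNIX domain socket
        [ -S "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}
```

In the trace, the equivalent wait runs right after `nvmf_tgt -i 0 -e 0xFFFF` is launched under `ip netns exec`, so RPC calls like `nvmf_create_transport` only proceed once the target is listening.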
00:16:19.597 [2024-07-25 03:59:34.696134] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.597 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.597 [2024-07-25 03:59:34.735660] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:19.597 [2024-07-25 03:59:34.763398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.597 [2024-07-25 03:59:34.852078] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.597 [2024-07-25 03:59:34.852139] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.597 [2024-07-25 03:59:34.852152] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.597 [2024-07-25 03:59:34.852163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.597 [2024-07-25 03:59:34.852173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
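The nvmf_tcp_init steps traced earlier (nvmf/common.sh@229-268) move one e810 port into a private network namespace so initiator and target traffic crosses a real link. The following condensed sketch uses the interface, namespace, and address values taken directly from the log; the DRYRUN wrapper is an illustration-only addition so the command plan can be printed without root, whereas the real script executes these directly.

```shell
# Condensed sketch of the nvmf_tcp_init flow from the trace above.
# Names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk, 10.0.0.1/2) come from the log;
# the run()/DRYRUN wrapper is an added illustration, not part of common.sh.
run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

nvmf_tcp_init_sketch() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"          # target port lives in the netns
    run ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side of the link
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # Allow NVMe/TCP (port 4420) in on the initiator-facing interface
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                            # initiator -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1        # target -> initiator
}
```

With `DRYRUN=1` the function only prints the plan; running it for real requires root and the two ice-driver ports discovered earlier in the log. The two pings mirror the connectivity check whose output appears just below, after which `nvmf_tgt` is started inside the namespace.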
00:16:19.597 [2024-07-25 03:59:34.852199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.854 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:19.854 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:19.854 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:19.854 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:19.854 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:19.854 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.854 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:20.112 [2024-07-25 03:59:35.258355] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.112 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:20.112 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:20.112 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:20.369 Malloc1 00:16:20.369 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:20.626 Malloc2 00:16:20.626 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:20.884 03:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:21.141 03:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.398 [2024-07-25 03:59:36.623124] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.398 03:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:21.398 03:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I beb82a90-3940-49dc-b530-8b3766a17e7f -a 10.0.0.2 -s 4420 -i 4 00:16:21.656 03:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:21.656 03:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:21.656 03:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.656 03:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:21.656 03:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:23.555 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:23.813 [ 0]:0x1 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64a8ae3ec0f54e2fa3d68d6c961bcb24 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64a8ae3ec0f54e2fa3d68d6c961bcb24 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:23.813 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:24.071 [ 0]:0x1 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64a8ae3ec0f54e2fa3d68d6c961bcb24 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64a8ae3ec0f54e2fa3d68d6c961bcb24 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:24.071 [ 1]:0x2 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1d6f7a65f93841afb54b24b3b54606c7 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1d6f7a65f93841afb54b24b3b54606c7 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:24.071 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:24.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.329 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.587 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:24.845 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:24.845 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I beb82a90-3940-49dc-b530-8b3766a17e7f -a 10.0.0.2 -s 4420 -i 4 00:16:24.845 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:24.845 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:24.845 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.845 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:24.845 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:24.845 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:26.743 03:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:26.743 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:26.743 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:27.001 [ 0]:0x2 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1d6f7a65f93841afb54b24b3b54606c7 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1d6f7a65f93841afb54b24b3b54606c7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:27.001 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:27.568 [ 0]:0x1 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64a8ae3ec0f54e2fa3d68d6c961bcb24 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64a8ae3ec0f54e2fa3d68d6c961bcb24 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:16:27.568 [ 1]:0x2 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1d6f7a65f93841afb54b24b3b54606c7 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1d6f7a65f93841afb54b24b3b54606c7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:27.568 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:27.826 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:27.826 [ 0]:0x2 00:16:27.826 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:27.826 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:27.827 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1d6f7a65f93841afb54b24b3b54606c7 00:16:27.827 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ 1d6f7a65f93841afb54b24b3b54606c7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:27.827 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:27.827 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:28.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.084 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:28.341 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:28.341 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I beb82a90-3940-49dc-b530-8b3766a17e7f -a 10.0.0.2 -s 4420 -i 4 00:16:28.341 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:28.342 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:28.342 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:28.342 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:28.342 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:28.342 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:30.864 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:30.864 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:16:30.864 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:30.864 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:30.864 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:30.864 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:30.864 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:30.864 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:30.864 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:30.865 [ 0]:0x1 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64a8ae3ec0f54e2fa3d68d6c961bcb24 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64a8ae3ec0f54e2fa3d68d6c961bcb24 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:30.865 [ 1]:0x2 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1d6f7a65f93841afb54b24b3b54606c7 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1d6f7a65f93841afb54b24b3b54606c7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:30.865 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:30.865 [ 0]:0x2 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1d6f7a65f93841afb54b24b3b54606c7 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1d6f7a65f93841afb54b24b3b54606c7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:30.865 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:31.123 [2024-07-25 03:59:46.364503] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:31.123 request: 00:16:31.123 { 00:16:31.123 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.123 "nsid": 2, 00:16:31.123 "host": "nqn.2016-06.io.spdk:host1", 00:16:31.123 "method": "nvmf_ns_remove_host", 00:16:31.123 "req_id": 1 00:16:31.123 } 00:16:31.123 Got JSON-RPC error response 00:16:31.123 response: 00:16:31.123 { 00:16:31.123 "code": -32602, 00:16:31.123 "message": "Invalid parameters" 00:16:31.123 } 00:16:31.123 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:31.123 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:31.123 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:31.123 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:31.123 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:31.123 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:31.123 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:16:31.123 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:31.123 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.123 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:31.123 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.123 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:31.123 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:31.123 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:31.380 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:31.380 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:31.380 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:31.380 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:31.380 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:31.380 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:31.380 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:31.380 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:31.380 03:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:31.380 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:31.380 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:31.380 [ 0]:0x2 00:16:31.381 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:31.381 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:31.381 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1d6f7a65f93841afb54b24b3b54606c7 00:16:31.381 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1d6f7a65f93841afb54b24b3b54606c7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:31.381 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:31.381 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:31.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.639 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=818059 00:16:31.639 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:31.639 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.639 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 818059 /var/tmp/host.sock 00:16:31.639 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 818059 ']' 00:16:31.639 03:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:31.639 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.639 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:31.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:31.639 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.639 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:31.639 [2024-07-25 03:59:46.747618] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:16:31.639 [2024-07-25 03:59:46.747709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818059 ] 00:16:31.639 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.639 [2024-07-25 03:59:46.781151] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:31.639 [2024-07-25 03:59:46.808782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.639 [2024-07-25 03:59:46.894480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.896 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.896 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:31.896 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.154 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:32.411 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 1d7c1c3a-3bf8-41b8-8951-cc1d773418ba 00:16:32.411 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:32.411 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 1D7C1C3A3BF841B88951CC1D773418BA -i 00:16:32.669 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid cc99ced0-4d9f-451f-9f62-17a9115da94b 00:16:32.669 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:32.669 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g CC99CED04D9F451F9F6217A9115DA94B -i 00:16:32.926 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:33.184 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:33.441 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:33.441 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:33.698 nvme0n1 00:16:33.698 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:33.698 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:34.263 nvme1n2 00:16:34.263 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:34.263 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:34.263 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:34.263 03:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:34.263 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:34.263 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:34.263 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:34.263 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:34.263 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:34.521 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 1d7c1c3a-3bf8-41b8-8951-cc1d773418ba == \1\d\7\c\1\c\3\a\-\3\b\f\8\-\4\1\b\8\-\8\9\5\1\-\c\c\1\d\7\7\3\4\1\8\b\a ]] 00:16:34.521 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:34.521 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:34.521 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:34.779 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ cc99ced0-4d9f-451f-9f62-17a9115da94b == \c\c\9\9\c\e\d\0\-\4\d\9\f\-\4\5\1\f\-\9\f\6\2\-\1\7\a\9\1\1\5\d\a\9\4\b ]] 00:16:34.779 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 818059 00:16:34.779 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 818059 ']' 00:16:34.779 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@954 -- # kill -0 818059 00:16:34.779 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:34.779 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:34.779 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 818059 00:16:35.037 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:35.037 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:35.037 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 818059' 00:16:35.037 killing process with pid 818059 00:16:35.037 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 818059 00:16:35.037 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 818059 00:16:35.294 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:35.557 03:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:35.557 rmmod nvme_tcp 00:16:35.557 rmmod nvme_fabrics 00:16:35.557 rmmod nvme_keyring 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 816439 ']' 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 816439 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 816439 ']' 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 816439 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.557 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 816439 00:16:35.859 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:35.859 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:35.859 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 816439' 00:16:35.859 killing process with pid 816439 00:16:35.859 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 816439 
00:16:35.859 03:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 816439 00:16:36.118 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:36.118 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:36.118 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:36.119 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.119 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:36.119 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.119 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.119 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:38.023 00:16:38.023 real 0m20.704s 00:16:38.023 user 0m26.867s 00:16:38.023 sys 0m4.057s 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:38.023 ************************************ 00:16:38.023 END TEST nvmf_ns_masking 00:16:38.023 ************************************ 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:38.023 03:59:53 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:38.023 ************************************ 00:16:38.023 START TEST nvmf_nvme_cli 00:16:38.023 ************************************ 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:38.023 * Looking for test storage... 00:16:38.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.023 03:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:38.023 
03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:38.023 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.552 
03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:40.552 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.552 03:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:40.552 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:40.552 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:40.552 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 
-- # [[ yes == yes ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.552 03:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:40.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:16:40.552 00:16:40.552 --- 10.0.0.2 ping statistics --- 00:16:40.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.552 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:16:40.552 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:40.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:16:40.553 00:16:40.553 --- 10.0.0.1 ping statistics --- 00:16:40.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.553 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=820427 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 820427 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 820427 ']' 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.553 [2024-07-25 03:59:55.458580] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:16:40.553 [2024-07-25 03:59:55.458664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.553 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.553 [2024-07-25 03:59:55.496032] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:40.553 [2024-07-25 03:59:55.528479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.553 [2024-07-25 03:59:55.624666] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:40.553 [2024-07-25 03:59:55.624734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.553 [2024-07-25 03:59:55.624750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.553 [2024-07-25 03:59:55.624764] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.553 [2024-07-25 03:59:55.624776] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.553 [2024-07-25 03:59:55.624862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.553 [2024-07-25 03:59:55.624918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.553 [2024-07-25 03:59:55.624969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.553 [2024-07-25 03:59:55.624972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.553 03:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.553 [2024-07-25 03:59:55.781865] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.553 Malloc0 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.553 Malloc1 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.553 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.811 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.811 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:40.811 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.811 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.811 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.811 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.811 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.811 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.811 [2024-07-25 03:59:55.867655] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.811 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.811 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:40.811 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.811 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.811 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:40.811 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:16:40.811 00:16:40.811 Discovery Log Number of Records 2, Generation counter 2 00:16:40.811 =====Discovery Log Entry 0====== 00:16:40.811 trtype: tcp 00:16:40.811 adrfam: ipv4 00:16:40.811 subtype: current discovery subsystem 00:16:40.811 treq: not required 00:16:40.811 portid: 0 00:16:40.811 trsvcid: 4420 00:16:40.811 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:40.811 traddr: 10.0.0.2 00:16:40.811 eflags: explicit discovery connections, duplicate discovery information 00:16:40.811 sectype: none 00:16:40.811 =====Discovery Log Entry 1====== 00:16:40.811 trtype: tcp 00:16:40.811 adrfam: ipv4 00:16:40.811 subtype: nvme subsystem 00:16:40.811 treq: not required 00:16:40.811 portid: 0 00:16:40.811 trsvcid: 4420 00:16:40.811 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:40.811 traddr: 10.0.0.2 00:16:40.811 eflags: none 00:16:40.811 sectype: none 00:16:40.811 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:40.811 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:40.811 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:40.811 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:40.811 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:40.811 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:40.811 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:40.811 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 
00:16:40.811 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:40.811 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:40.811 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.744 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:41.744 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:41.744 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.744 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:41.744 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:41.744 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 
00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:43.642 /dev/nvme0n1 ]] 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local 
dev _ 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 
-- # lsblk -o NAME,SERIAL 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:16:43.642 rmmod nvme_tcp 00:16:43.642 rmmod nvme_fabrics 00:16:43.642 rmmod nvme_keyring 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 820427 ']' 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 820427 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 820427 ']' 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 820427 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 820427 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 820427' 00:16:43.642 killing process with pid 820427 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 820427 00:16:43.642 03:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 820427 00:16:44.208 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
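The teardown traced above relies on the `waitforserial_disconnect` helper: after `nvme disconnect`, it polls `lsblk -l -o NAME,SERIAL` until the SPDK serial (`SPDKISFASTANDAWESOME`) no longer appears. A minimal self-contained sketch of that polling idea, with a `list_block_devices` stub standing in for real `lsblk` output (an assumption so the sketch runs anywhere):

```shell
#!/usr/bin/env bash
# Stub for `lsblk -l -o NAME,SERIAL`; assumption: the SPDK namespace is
# already gone and only unrelated devices remain.
list_block_devices() {
    printf 'sda\nnvme1n1 OTHERSERIAL\n'
}

# Poll until the given serial disappears from the device list.
waitforserial_disconnect() {
    local serial=$1 max_tries=${2:-15} i=0
    while ((i < max_tries)); do
        # The real helper runs: lsblk -l -o NAME,SERIAL | grep -q -w "$serial"
        if ! list_block_devices | grep -q -w "$serial"; then
            return 0            # serial vanished: disconnect completed
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1                    # device still present after max_tries
}

waitforserial_disconnect SPDKISFASTANDAWESOME 3 && echo disconnected
```

The `grep -w` word match matters here: it keeps a serial like `OTHERSERIAL` from falsely matching a substring, which is why the trace above shows `grep -q -w` rather than plain `grep -q`.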
00:16:44.208 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:44.208 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:44.208 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.208 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:44.208 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.208 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.208 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:46.108 00:16:46.108 real 0m7.999s 00:16:46.108 user 0m14.635s 00:16:46.108 sys 0m2.146s 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.108 ************************************ 00:16:46.108 END TEST nvmf_nvme_cli 00:16:46.108 ************************************ 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.108 ************************************ 00:16:46.108 START TEST nvmf_vfio_user 00:16:46.108 ************************************ 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:46.108 * Looking for test storage... 00:16:46.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user 
-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.108 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.109 04:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:46.109 04:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=821431 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 821431' 00:16:46.109 Process pid: 821431 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 821431 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 821431 ']' 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:46.109 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:46.109 [2024-07-25 04:00:01.401061] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:16:46.109 [2024-07-25 04:00:01.401164] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.365 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.365 [2024-07-25 04:00:01.439184] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:46.365 [2024-07-25 04:00:01.469064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.365 [2024-07-25 04:00:01.564214] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.365 [2024-07-25 04:00:01.564292] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.365 [2024-07-25 04:00:01.564310] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.365 [2024-07-25 04:00:01.564338] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
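The `waitforlisten 821431` call above blocks until `nvmf_tgt` is up and listening on its RPC UNIX socket (`/var/tmp/spdk.sock`), retrying up to `max_retries=100` times. A hedged sketch of that wait loop; the socket check and retry interval are illustrative assumptions, not the helper's exact implementation:

```shell
#!/usr/bin/env bash
# Retry until the target's RPC UNIX socket exists, or give up.
wait_for_rpc_socket() {
    local sock=${1:-/var/tmp/spdk.sock} max_retries=${2:-100} i=0
    while ((i < max_retries)); do
        # -S is true once the target has created its listening UNIX socket
        [ -S "$sock" ] && return 0
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}

# Usage: fail fast when the target never comes up.
wait_for_rpc_socket /tmp/no-such.sock 3 || echo 'target did not start'
```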
00:16:46.365 [2024-07-25 04:00:01.564349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.365 [2024-07-25 04:00:01.564425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.365 [2024-07-25 04:00:01.564478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.365 [2024-07-25 04:00:01.564593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.365 [2024-07-25 04:00:01.564595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.624 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:46.624 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:46.624 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:47.554 04:00:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:47.811 04:00:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:47.811 04:00:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:47.811 04:00:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:47.811 04:00:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:47.811 04:00:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:48.067 Malloc1 00:16:48.067 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:48.323 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:48.579 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:48.834 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:48.834 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:48.834 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:49.090 Malloc2 00:16:49.091 04:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:49.347 04:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:49.604 04:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:49.862 04:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:49.862 04:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:49.862 04:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:49.862 04:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:49.862 04:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:49.862 04:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:49.862 [2024-07-25 04:00:04.984159] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:16:49.862 [2024-07-25 04:00:04.984205] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821881 ] 00:16:49.862 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.862 [2024-07-25 04:00:05.000984] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:49.862 [2024-07-25 04:00:05.018460] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:49.862 [2024-07-25 04:00:05.027713] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:49.862 [2024-07-25 04:00:05.027745] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa615a96000 00:16:49.862 [2024-07-25 04:00:05.028703] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:49.862 [2024-07-25 04:00:05.029697] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:49.862 [2024-07-25 04:00:05.030703] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:49.862 [2024-07-25 04:00:05.031707] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:49.862 [2024-07-25 04:00:05.032709] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:49.862 [2024-07-25 04:00:05.033716] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:49.862 [2024-07-25 04:00:05.034724] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:49.862 [2024-07-25 04:00:05.035731] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:49.862 [2024-07-25 04:00:05.036740] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 
00:16:49.862 [2024-07-25 04:00:05.036766] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa614858000 00:16:49.862 [2024-07-25 04:00:05.037889] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:49.862 [2024-07-25 04:00:05.052927] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:49.862 [2024-07-25 04:00:05.052963] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:49.862 [2024-07-25 04:00:05.057862] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:49.862 [2024-07-25 04:00:05.057913] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:49.862 [2024-07-25 04:00:05.058008] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:49.862 [2024-07-25 04:00:05.058035] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:49.862 [2024-07-25 04:00:05.058045] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:49.862 [2024-07-25 04:00:05.058855] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:49.862 [2024-07-25 04:00:05.058879] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:49.862 [2024-07-25 04:00:05.058892] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:49.862 [2024-07-25 04:00:05.059859] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:49.862 [2024-07-25 04:00:05.059877] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:49.862 [2024-07-25 04:00:05.059890] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:49.862 [2024-07-25 04:00:05.060866] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:49.862 [2024-07-25 04:00:05.060883] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:49.862 [2024-07-25 04:00:05.061876] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:49.862 [2024-07-25 04:00:05.061895] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:49.862 [2024-07-25 04:00:05.061903] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:49.862 [2024-07-25 04:00:05.061914] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:49.862 [2024-07-25 04:00:05.062023] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:49.862 [2024-07-25 04:00:05.062031] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:49.862 [2024-07-25 04:00:05.062039] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:49.862 [2024-07-25 04:00:05.062880] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:49.862 [2024-07-25 04:00:05.063880] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:49.862 [2024-07-25 04:00:05.064891] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:49.862 [2024-07-25 04:00:05.065887] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:49.862 [2024-07-25 04:00:05.065996] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:49.862 [2024-07-25 04:00:05.066903] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:49.862 [2024-07-25 04:00:05.066920] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:49.862 [2024-07-25 04:00:05.066928] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:49.862 [2024-07-25 04:00:05.066951] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:49.862 [2024-07-25 04:00:05.066965] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:49.862 [2024-07-25 04:00:05.066988] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:49.862 [2024-07-25 04:00:05.066997] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:49.862 [2024-07-25 04:00:05.067004] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:49.862 [2024-07-25 04:00:05.067022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:49.862 [2024-07-25 04:00:05.067074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:49.862 [2024-07-25 04:00:05.067089] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:49.862 [2024-07-25 04:00:05.067097] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:49.862 [2024-07-25 04:00:05.067105] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:49.862 [2024-07-25 04:00:05.067113] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:49.862 [2024-07-25 04:00:05.067120] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:49.862 [2024-07-25 04:00:05.067128] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:49.862 [2024-07-25 04:00:05.067136] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting 
state to configure AER (timeout 30000 ms) 00:16:49.862 [2024-07-25 04:00:05.067148] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067167] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:49.863 [2024-07-25 04:00:05.067188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.067209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:49.863 [2024-07-25 04:00:05.067238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:49.863 [2024-07-25 04:00:05.067265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:49.863 [2024-07-25 04:00:05.067279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:49.863 [2024-07-25 04:00:05.067304] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067321] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:49.863 [2024-07-25 04:00:05.067352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 
p:1 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.067363] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:49.863 [2024-07-25 04:00:05.067372] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067390] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067401] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:49.863 [2024-07-25 04:00:05.067427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.067495] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067511] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067540] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:49.863 [2024-07-25 04:00:05.067549] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:49.863 [2024-07-25 04:00:05.067556] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:49.863 [2024-07-25 04:00:05.067566] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:49.863 [2024-07-25 04:00:05.067585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.067616] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:49.863 [2024-07-25 04:00:05.067637] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067652] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067664] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:49.863 [2024-07-25 04:00:05.067672] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:49.863 [2024-07-25 04:00:05.067678] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:49.863 [2024-07-25 04:00:05.067687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:49.863 [2024-07-25 04:00:05.067715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.067736] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067750] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067762] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:49.863 [2024-07-25 04:00:05.067770] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:49.863 [2024-07-25 04:00:05.067776] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:49.863 [2024-07-25 04:00:05.067786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:49.863 [2024-07-25 04:00:05.067797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.067810] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067822] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067835] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067848] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067857] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067866] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067874] 
nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:49.863 [2024-07-25 04:00:05.067882] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:49.863 [2024-07-25 04:00:05.067890] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:49.863 [2024-07-25 04:00:05.067916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:49.863 [2024-07-25 04:00:05.067935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.067953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:49.863 [2024-07-25 04:00:05.067966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.067982] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:49.863 [2024-07-25 04:00:05.067994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.068010] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:49.863 [2024-07-25 04:00:05.068022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.068047] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:49.863 
[2024-07-25 04:00:05.068058] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:49.863 [2024-07-25 04:00:05.068064] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:49.863 [2024-07-25 04:00:05.068070] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:49.863 [2024-07-25 04:00:05.068076] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:49.863 [2024-07-25 04:00:05.068085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:49.863 [2024-07-25 04:00:05.068097] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:49.863 [2024-07-25 04:00:05.068105] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:49.863 [2024-07-25 04:00:05.068111] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:49.863 [2024-07-25 04:00:05.068120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:49.863 [2024-07-25 04:00:05.068131] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:49.863 [2024-07-25 04:00:05.068138] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:49.863 [2024-07-25 04:00:05.068144] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:49.863 [2024-07-25 04:00:05.068153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:49.863 [2024-07-25 04:00:05.068165] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:49.863 [2024-07-25 04:00:05.068173] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:49.863 [2024-07-25 04:00:05.068179] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:49.863 [2024-07-25 04:00:05.068188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:49.863 [2024-07-25 04:00:05.068199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.068219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.068262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.068276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:49.863 ===================================================== 00:16:49.863 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:49.863 ===================================================== 00:16:49.863 Controller Capabilities/Features 00:16:49.863 ================================ 00:16:49.863 Vendor ID: 4e58 00:16:49.863 Subsystem Vendor ID: 4e58 00:16:49.863 Serial Number: SPDK1 00:16:49.863 Model Number: SPDK bdev Controller 00:16:49.863 Firmware Version: 24.09 00:16:49.863 Recommended Arb Burst: 6 00:16:49.863 IEEE OUI Identifier: 8d 6b 50 00:16:49.863 Multi-path I/O 00:16:49.863 May have multiple subsystem ports: Yes 00:16:49.863 May have multiple controllers: Yes 00:16:49.863 Associated with SR-IOV VF: No 
00:16:49.863 Max Data Transfer Size: 131072 00:16:49.863 Max Number of Namespaces: 32 00:16:49.863 Max Number of I/O Queues: 127 00:16:49.863 NVMe Specification Version (VS): 1.3 00:16:49.863 NVMe Specification Version (Identify): 1.3 00:16:49.863 Maximum Queue Entries: 256 00:16:49.863 Contiguous Queues Required: Yes 00:16:49.863 Arbitration Mechanisms Supported 00:16:49.863 Weighted Round Robin: Not Supported 00:16:49.863 Vendor Specific: Not Supported 00:16:49.863 Reset Timeout: 15000 ms 00:16:49.863 Doorbell Stride: 4 bytes 00:16:49.863 NVM Subsystem Reset: Not Supported 00:16:49.863 Command Sets Supported 00:16:49.863 NVM Command Set: Supported 00:16:49.863 Boot Partition: Not Supported 00:16:49.863 Memory Page Size Minimum: 4096 bytes 00:16:49.863 Memory Page Size Maximum: 4096 bytes 00:16:49.863 Persistent Memory Region: Not Supported 00:16:49.863 Optional Asynchronous Events Supported 00:16:49.863 Namespace Attribute Notices: Supported 00:16:49.863 Firmware Activation Notices: Not Supported 00:16:49.863 ANA Change Notices: Not Supported 00:16:49.863 PLE Aggregate Log Change Notices: Not Supported 00:16:49.863 LBA Status Info Alert Notices: Not Supported 00:16:49.863 EGE Aggregate Log Change Notices: Not Supported 00:16:49.863 Normal NVM Subsystem Shutdown event: Not Supported 00:16:49.863 Zone Descriptor Change Notices: Not Supported 00:16:49.863 Discovery Log Change Notices: Not Supported 00:16:49.863 Controller Attributes 00:16:49.863 128-bit Host Identifier: Supported 00:16:49.863 Non-Operational Permissive Mode: Not Supported 00:16:49.863 NVM Sets: Not Supported 00:16:49.863 Read Recovery Levels: Not Supported 00:16:49.863 Endurance Groups: Not Supported 00:16:49.863 Predictable Latency Mode: Not Supported 00:16:49.863 Traffic Based Keep ALive: Not Supported 00:16:49.863 Namespace Granularity: Not Supported 00:16:49.863 SQ Associations: Not Supported 00:16:49.863 UUID List: Not Supported 00:16:49.863 Multi-Domain Subsystem: Not Supported 00:16:49.863 
Fixed Capacity Management: Not Supported 00:16:49.863 Variable Capacity Management: Not Supported 00:16:49.863 Delete Endurance Group: Not Supported 00:16:49.863 Delete NVM Set: Not Supported 00:16:49.863 Extended LBA Formats Supported: Not Supported 00:16:49.863 Flexible Data Placement Supported: Not Supported 00:16:49.863 00:16:49.863 Controller Memory Buffer Support 00:16:49.863 ================================ 00:16:49.863 Supported: No 00:16:49.863 00:16:49.863 Persistent Memory Region Support 00:16:49.863 ================================ 00:16:49.863 Supported: No 00:16:49.863 00:16:49.863 Admin Command Set Attributes 00:16:49.863 ============================ 00:16:49.863 Security Send/Receive: Not Supported 00:16:49.863 Format NVM: Not Supported 00:16:49.863 Firmware Activate/Download: Not Supported 00:16:49.863 Namespace Management: Not Supported 00:16:49.863 Device Self-Test: Not Supported 00:16:49.863 Directives: Not Supported 00:16:49.863 NVMe-MI: Not Supported 00:16:49.863 Virtualization Management: Not Supported 00:16:49.863 Doorbell Buffer Config: Not Supported 00:16:49.863 Get LBA Status Capability: Not Supported 00:16:49.863 Command & Feature Lockdown Capability: Not Supported 00:16:49.863 Abort Command Limit: 4 00:16:49.863 Async Event Request Limit: 4 00:16:49.863 Number of Firmware Slots: N/A 00:16:49.863 Firmware Slot 1 Read-Only: N/A 00:16:49.863 Firmware Activation Without Reset: N/A 00:16:49.863 Multiple Update Detection Support: N/A 00:16:49.863 Firmware Update Granularity: No Information Provided 00:16:49.863 Per-Namespace SMART Log: No 00:16:49.863 Asymmetric Namespace Access Log Page: Not Supported 00:16:49.863 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:49.863 Command Effects Log Page: Supported 00:16:49.863 Get Log Page Extended Data: Supported 00:16:49.863 Telemetry Log Pages: Not Supported 00:16:49.863 Persistent Event Log Pages: Not Supported 00:16:49.863 Supported Log Pages Log Page: May Support 00:16:49.863 Commands Supported & 
Effects Log Page: Not Supported 00:16:49.863 Feature Identifiers & Effects Log Page:May Support 00:16:49.863 NVMe-MI Commands & Effects Log Page: May Support 00:16:49.863 Data Area 4 for Telemetry Log: Not Supported 00:16:49.863 Error Log Page Entries Supported: 128 00:16:49.863 Keep Alive: Supported 00:16:49.863 Keep Alive Granularity: 10000 ms 00:16:49.863 00:16:49.863 NVM Command Set Attributes 00:16:49.863 ========================== 00:16:49.863 Submission Queue Entry Size 00:16:49.863 Max: 64 00:16:49.863 Min: 64 00:16:49.863 Completion Queue Entry Size 00:16:49.863 Max: 16 00:16:49.863 Min: 16 00:16:49.863 Number of Namespaces: 32 00:16:49.863 Compare Command: Supported 00:16:49.863 Write Uncorrectable Command: Not Supported 00:16:49.863 Dataset Management Command: Supported 00:16:49.863 Write Zeroes Command: Supported 00:16:49.863 Set Features Save Field: Not Supported 00:16:49.863 Reservations: Not Supported 00:16:49.863 Timestamp: Not Supported 00:16:49.863 Copy: Supported 00:16:49.863 Volatile Write Cache: Present 00:16:49.863 Atomic Write Unit (Normal): 1 00:16:49.863 Atomic Write Unit (PFail): 1 00:16:49.863 Atomic Compare & Write Unit: 1 00:16:49.863 Fused Compare & Write: Supported 00:16:49.863 Scatter-Gather List 00:16:49.863 SGL Command Set: Supported (Dword aligned) 00:16:49.863 SGL Keyed: Not Supported 00:16:49.863 SGL Bit Bucket Descriptor: Not Supported 00:16:49.863 SGL Metadata Pointer: Not Supported 00:16:49.863 Oversized SGL: Not Supported 00:16:49.863 SGL Metadata Address: Not Supported 00:16:49.863 SGL Offset: Not Supported 00:16:49.863 Transport SGL Data Block: Not Supported 00:16:49.863 Replay Protected Memory Block: Not Supported 00:16:49.863 00:16:49.863 Firmware Slot Information 00:16:49.863 ========================= 00:16:49.863 Active slot: 1 00:16:49.863 Slot 1 Firmware Revision: 24.09 00:16:49.863 00:16:49.863 00:16:49.863 Commands Supported and Effects 00:16:49.863 ============================== 00:16:49.863 Admin Commands 
00:16:49.863 -------------- 00:16:49.863 Get Log Page (02h): Supported 00:16:49.863 Identify (06h): Supported 00:16:49.863 Abort (08h): Supported 00:16:49.863 Set Features (09h): Supported 00:16:49.863 Get Features (0Ah): Supported 00:16:49.863 Asynchronous Event Request (0Ch): Supported 00:16:49.863 Keep Alive (18h): Supported 00:16:49.863 I/O Commands 00:16:49.863 ------------ 00:16:49.863 Flush (00h): Supported LBA-Change 00:16:49.863 Write (01h): Supported LBA-Change 00:16:49.863 Read (02h): Supported 00:16:49.863 Compare (05h): Supported 00:16:49.863 Write Zeroes (08h): Supported LBA-Change 00:16:49.863 Dataset Management (09h): Supported LBA-Change 00:16:49.863 Copy (19h): Supported LBA-Change 00:16:49.863 00:16:49.863 Error Log 00:16:49.863 ========= 00:16:49.863 00:16:49.863 Arbitration 00:16:49.863 =========== 00:16:49.863 Arbitration Burst: 1 00:16:49.863 00:16:49.863 Power Management 00:16:49.863 ================ 00:16:49.863 Number of Power States: 1 00:16:49.863 Current Power State: Power State #0 00:16:49.863 Power State #0: 00:16:49.863 Max Power: 0.00 W 00:16:49.863 Non-Operational State: Operational 00:16:49.863 Entry Latency: Not Reported 00:16:49.863 Exit Latency: Not Reported 00:16:49.863 Relative Read Throughput: 0 00:16:49.863 Relative Read Latency: 0 00:16:49.863 Relative Write Throughput: 0 00:16:49.863 Relative Write Latency: 0 00:16:49.863 Idle Power: Not Reported 00:16:49.863 Active Power: Not Reported 00:16:49.863 Non-Operational Permissive Mode: Not Supported 00:16:49.863 00:16:49.863 Health Information 00:16:49.863 ================== 00:16:49.863 Critical Warnings: 00:16:49.863 Available Spare Space: OK 00:16:49.863 Temperature: OK 00:16:49.863 Device Reliability: OK 00:16:49.863 Read Only: No 00:16:49.863 Volatile Memory Backup: OK 00:16:49.863 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:49.863 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:49.863 Available Spare: 0% 00:16:49.863 Available Sp[2024-07-25 04:00:05.068415] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:49.863 [2024-07-25 04:00:05.068433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.068475] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:49.863 [2024-07-25 04:00:05.068493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.068505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.068530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.068545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:49.863 [2024-07-25 04:00:05.072253] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:49.863 [2024-07-25 04:00:05.072274] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:49.863 [2024-07-25 04:00:05.072929] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:49.863 [2024-07-25 04:00:05.073012] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:49.863 [2024-07-25 04:00:05.073026] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:49.863 [2024-07-25 
04:00:05.073936] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:49.863 [2024-07-25 04:00:05.073959] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:49.863 [2024-07-25 04:00:05.074011] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:49.863 [2024-07-25 04:00:05.075983] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:49.863 are Threshold: 0% 00:16:49.863 Life Percentage Used: 0% 00:16:49.863 Data Units Read: 0 00:16:49.863 Data Units Written: 0 00:16:49.863 Host Read Commands: 0 00:16:49.863 Host Write Commands: 0 00:16:49.864 Controller Busy Time: 0 minutes 00:16:49.864 Power Cycles: 0 00:16:49.864 Power On Hours: 0 hours 00:16:49.864 Unsafe Shutdowns: 0 00:16:49.864 Unrecoverable Media Errors: 0 00:16:49.864 Lifetime Error Log Entries: 0 00:16:49.864 Warning Temperature Time: 0 minutes 00:16:49.864 Critical Temperature Time: 0 minutes 00:16:49.864 00:16:49.864 Number of Queues 00:16:49.864 ================ 00:16:49.864 Number of I/O Submission Queues: 127 00:16:49.864 Number of I/O Completion Queues: 127 00:16:49.864 00:16:49.864 Active Namespaces 00:16:49.864 ================= 00:16:49.864 Namespace ID:1 00:16:49.864 Error Recovery Timeout: Unlimited 00:16:49.864 Command Set Identifier: NVM (00h) 00:16:49.864 Deallocate: Supported 00:16:49.864 Deallocated/Unwritten Error: Not Supported 00:16:49.864 Deallocated Read Value: Unknown 00:16:49.864 Deallocate in Write Zeroes: Not Supported 00:16:49.864 Deallocated Guard Field: 0xFFFF 00:16:49.864 Flush: Supported 00:16:49.864 Reservation: Supported 00:16:49.864 Namespace Sharing Capabilities: Multiple Controllers 00:16:49.864 Size (in LBAs): 131072 (0GiB) 00:16:49.864 Capacity (in LBAs): 
131072 (0GiB) 00:16:49.864 Utilization (in LBAs): 131072 (0GiB) 00:16:49.864 NGUID: 8C91EA21333B4132881D368D9A09D1E9 00:16:49.864 UUID: 8c91ea21-333b-4132-881d-368d9a09d1e9 00:16:49.864 Thin Provisioning: Not Supported 00:16:49.864 Per-NS Atomic Units: Yes 00:16:49.864 Atomic Boundary Size (Normal): 0 00:16:49.864 Atomic Boundary Size (PFail): 0 00:16:49.864 Atomic Boundary Offset: 0 00:16:49.864 Maximum Single Source Range Length: 65535 00:16:49.864 Maximum Copy Length: 65535 00:16:49.864 Maximum Source Range Count: 1 00:16:49.864 NGUID/EUI64 Never Reused: No 00:16:49.864 Namespace Write Protected: No 00:16:49.864 Number of LBA Formats: 1 00:16:49.864 Current LBA Format: LBA Format #00 00:16:49.864 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:49.864 00:16:49.864 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:49.864 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.119 [2024-07-25 04:00:05.308098] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:55.373 Initializing NVMe Controllers 00:16:55.373 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:55.373 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:55.373 Initialization complete. Launching workers. 
00:16:55.373 ======================================================== 00:16:55.373 Latency(us) 00:16:55.373 Device Information : IOPS MiB/s Average min max 00:16:55.373 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34642.85 135.32 3694.31 1155.86 8578.80 00:16:55.373 ======================================================== 00:16:55.373 Total : 34642.85 135.32 3694.31 1155.86 8578.80 00:16:55.373 00:16:55.373 [2024-07-25 04:00:10.331318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:55.373 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:55.373 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.373 [2024-07-25 04:00:10.565478] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:00.639 Initializing NVMe Controllers 00:17:00.639 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:00.639 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:00.639 Initialization complete. Launching workers. 
00:17:00.639 ======================================================== 00:17:00.639 Latency(us) 00:17:00.639 Device Information : IOPS MiB/s Average min max 00:17:00.639 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15999.99 62.50 8010.00 7522.07 11980.27 00:17:00.639 ======================================================== 00:17:00.639 Total : 15999.99 62.50 8010.00 7522.07 11980.27 00:17:00.639 00:17:00.639 [2024-07-25 04:00:15.604713] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:00.639 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:00.639 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.639 [2024-07-25 04:00:15.821784] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:05.906 [2024-07-25 04:00:20.882556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:05.906 Initializing NVMe Controllers 00:17:05.906 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:05.906 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:05.906 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:05.906 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:05.906 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:05.906 Initialization complete. Launching workers. 
00:17:05.906 Starting thread on core 2 00:17:05.906 Starting thread on core 3 00:17:05.906 Starting thread on core 1 00:17:05.906 04:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:05.906 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.906 [2024-07-25 04:00:21.194707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:09.183 [2024-07-25 04:00:24.260462] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:09.183 Initializing NVMe Controllers 00:17:09.183 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:09.183 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:09.183 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:09.183 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:09.183 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:09.183 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:09.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:09.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:09.183 Initialization complete. Launching workers. 
00:17:09.183 Starting thread on core 1 with urgent priority queue 00:17:09.183 Starting thread on core 2 with urgent priority queue 00:17:09.183 Starting thread on core 3 with urgent priority queue 00:17:09.183 Starting thread on core 0 with urgent priority queue 00:17:09.183 SPDK bdev Controller (SPDK1 ) core 0: 5874.33 IO/s 17.02 secs/100000 ios 00:17:09.183 SPDK bdev Controller (SPDK1 ) core 1: 5878.67 IO/s 17.01 secs/100000 ios 00:17:09.183 SPDK bdev Controller (SPDK1 ) core 2: 5437.67 IO/s 18.39 secs/100000 ios 00:17:09.183 SPDK bdev Controller (SPDK1 ) core 3: 5775.00 IO/s 17.32 secs/100000 ios 00:17:09.183 ======================================================== 00:17:09.183 00:17:09.183 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:09.183 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.443 [2024-07-25 04:00:24.548729] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:09.443 Initializing NVMe Controllers 00:17:09.443 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:09.443 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:09.443 Namespace ID: 1 size: 0GB 00:17:09.443 Initialization complete. 00:17:09.443 INFO: using host memory buffer for IO 00:17:09.443 Hello world! 
00:17:09.443 [2024-07-25 04:00:24.584322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:09.443 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:09.443 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.702 [2024-07-25 04:00:24.859311] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:10.632 Initializing NVMe Controllers 00:17:10.632 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:10.632 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:10.632 Initialization complete. Launching workers. 00:17:10.632 submit (in ns) avg, min, max = 7114.7, 3551.1, 4997041.1 00:17:10.632 complete (in ns) avg, min, max = 24661.1, 2063.3, 7990471.1 00:17:10.632 00:17:10.632 Submit histogram 00:17:10.632 ================ 00:17:10.632 Range in us Cumulative Count 00:17:10.632 3.532 - 3.556: 0.0151% ( 2) 00:17:10.632 3.556 - 3.579: 0.4440% ( 57) 00:17:10.632 3.579 - 3.603: 1.3019% ( 114) 00:17:10.632 3.603 - 3.627: 4.4627% ( 420) 00:17:10.632 3.627 - 3.650: 9.5575% ( 677) 00:17:10.632 3.650 - 3.674: 18.2420% ( 1154) 00:17:10.632 3.674 - 3.698: 26.6933% ( 1123) 00:17:10.632 3.698 - 3.721: 36.3260% ( 1280) 00:17:10.632 3.721 - 3.745: 43.9570% ( 1014) 00:17:10.632 3.745 - 3.769: 49.9323% ( 794) 00:17:10.632 3.769 - 3.793: 54.6809% ( 631) 00:17:10.632 3.793 - 3.816: 58.6318% ( 525) 00:17:10.632 3.816 - 3.840: 62.0936% ( 460) 00:17:10.632 3.840 - 3.864: 65.5855% ( 464) 00:17:10.632 3.864 - 3.887: 69.4988% ( 520) 00:17:10.632 3.887 - 3.911: 73.5099% ( 533) 00:17:10.632 3.911 - 3.935: 77.7845% ( 568) 00:17:10.632 3.935 - 3.959: 81.6376% ( 512) 00:17:10.632 3.959 - 3.982: 84.4446% ( 373) 00:17:10.632 3.982 - 
4.006: 86.6571% ( 294) 00:17:10.632 4.006 - 4.030: 88.3353% ( 223) 00:17:10.632 4.030 - 4.053: 89.8254% ( 198) 00:17:10.632 4.053 - 4.077: 90.9843% ( 154) 00:17:10.632 4.077 - 4.101: 91.9250% ( 125) 00:17:10.632 4.101 - 4.124: 92.8507% ( 123) 00:17:10.632 4.124 - 4.148: 93.6936% ( 112) 00:17:10.632 4.148 - 4.172: 94.3182% ( 83) 00:17:10.632 4.172 - 4.196: 94.9353% ( 82) 00:17:10.632 4.196 - 4.219: 95.3266% ( 52) 00:17:10.632 4.219 - 4.243: 95.6502% ( 43) 00:17:10.632 4.243 - 4.267: 95.9588% ( 41) 00:17:10.632 4.267 - 4.290: 96.0867% ( 17) 00:17:10.632 4.290 - 4.314: 96.2071% ( 16) 00:17:10.632 4.314 - 4.338: 96.3576% ( 20) 00:17:10.632 4.338 - 4.361: 96.5006% ( 19) 00:17:10.632 4.361 - 4.385: 96.6361% ( 18) 00:17:10.632 4.385 - 4.409: 96.7414% ( 14) 00:17:10.632 4.409 - 4.433: 96.8092% ( 9) 00:17:10.632 4.433 - 4.456: 96.8468% ( 5) 00:17:10.632 4.456 - 4.480: 96.8769% ( 4) 00:17:10.632 4.480 - 4.504: 96.9296% ( 7) 00:17:10.632 4.504 - 4.527: 96.9822% ( 7) 00:17:10.632 4.527 - 4.551: 97.0123% ( 4) 00:17:10.632 4.551 - 4.575: 97.0349% ( 3) 00:17:10.632 4.575 - 4.599: 97.0575% ( 3) 00:17:10.632 4.599 - 4.622: 97.0650% ( 1) 00:17:10.632 4.622 - 4.646: 97.0801% ( 2) 00:17:10.632 4.646 - 4.670: 97.1026% ( 3) 00:17:10.632 4.693 - 4.717: 97.1252% ( 3) 00:17:10.632 4.717 - 4.741: 97.1403% ( 2) 00:17:10.632 4.741 - 4.764: 97.1854% ( 6) 00:17:10.632 4.764 - 4.788: 97.2306% ( 6) 00:17:10.632 4.788 - 4.812: 97.2682% ( 5) 00:17:10.632 4.812 - 4.836: 97.3134% ( 6) 00:17:10.632 4.836 - 4.859: 97.3435% ( 4) 00:17:10.632 4.859 - 4.883: 97.3811% ( 5) 00:17:10.633 4.883 - 4.907: 97.4413% ( 8) 00:17:10.633 4.907 - 4.930: 97.5467% ( 14) 00:17:10.633 4.930 - 4.954: 97.5692% ( 3) 00:17:10.633 4.954 - 4.978: 97.6219% ( 7) 00:17:10.633 4.978 - 5.001: 97.7047% ( 11) 00:17:10.633 5.001 - 5.025: 97.7348% ( 4) 00:17:10.633 5.025 - 5.049: 97.7423% ( 1) 00:17:10.633 5.049 - 5.073: 97.8025% ( 8) 00:17:10.633 5.073 - 5.096: 97.8326% ( 4) 00:17:10.633 5.096 - 5.120: 97.8552% ( 3) 00:17:10.633 5.120 
- 5.144: 97.8703% ( 2) 00:17:10.633 5.144 - 5.167: 97.9154% ( 6) 00:17:10.633 5.167 - 5.191: 97.9305% ( 2) 00:17:10.633 5.191 - 5.215: 97.9455% ( 2) 00:17:10.633 5.215 - 5.239: 97.9681% ( 3) 00:17:10.633 5.239 - 5.262: 97.9907% ( 3) 00:17:10.633 5.262 - 5.286: 97.9982% ( 1) 00:17:10.633 5.286 - 5.310: 98.0057% ( 1) 00:17:10.633 5.310 - 5.333: 98.0132% ( 1) 00:17:10.633 5.333 - 5.357: 98.0283% ( 2) 00:17:10.633 5.357 - 5.381: 98.0358% ( 1) 00:17:10.633 5.381 - 5.404: 98.0433% ( 1) 00:17:10.633 5.428 - 5.452: 98.0509% ( 1) 00:17:10.633 5.476 - 5.499: 98.0584% ( 1) 00:17:10.633 5.499 - 5.523: 98.0659% ( 1) 00:17:10.633 5.547 - 5.570: 98.0734% ( 1) 00:17:10.633 5.570 - 5.594: 98.0810% ( 1) 00:17:10.633 5.594 - 5.618: 98.0885% ( 1) 00:17:10.633 5.618 - 5.641: 98.0960% ( 1) 00:17:10.633 5.689 - 5.713: 98.1036% ( 1) 00:17:10.633 5.713 - 5.736: 98.1111% ( 1) 00:17:10.633 5.736 - 5.760: 98.1186% ( 1) 00:17:10.633 5.784 - 5.807: 98.1261% ( 1) 00:17:10.633 5.807 - 5.831: 98.1337% ( 1) 00:17:10.633 5.855 - 5.879: 98.1412% ( 1) 00:17:10.633 5.902 - 5.926: 98.1487% ( 1) 00:17:10.633 5.997 - 6.021: 98.1562% ( 1) 00:17:10.633 6.068 - 6.116: 98.1713% ( 2) 00:17:10.633 6.163 - 6.210: 98.1788% ( 1) 00:17:10.633 6.542 - 6.590: 98.1863% ( 1) 00:17:10.633 6.590 - 6.637: 98.1939% ( 1) 00:17:10.633 6.827 - 6.874: 98.2014% ( 1) 00:17:10.633 6.969 - 7.016: 98.2164% ( 2) 00:17:10.633 7.016 - 7.064: 98.2240% ( 1) 00:17:10.633 7.301 - 7.348: 98.2315% ( 1) 00:17:10.633 7.348 - 7.396: 98.2390% ( 1) 00:17:10.633 7.396 - 7.443: 98.2541% ( 2) 00:17:10.633 7.490 - 7.538: 98.2616% ( 1) 00:17:10.633 7.538 - 7.585: 98.2766% ( 2) 00:17:10.633 7.585 - 7.633: 98.2842% ( 1) 00:17:10.633 7.727 - 7.775: 98.2917% ( 1) 00:17:10.633 7.775 - 7.822: 98.2992% ( 1) 00:17:10.633 7.822 - 7.870: 98.3067% ( 1) 00:17:10.633 8.059 - 8.107: 98.3218% ( 2) 00:17:10.633 8.107 - 8.154: 98.3368% ( 2) 00:17:10.633 8.201 - 8.249: 98.3444% ( 1) 00:17:10.633 8.249 - 8.296: 98.3519% ( 1) 00:17:10.633 8.296 - 8.344: 98.3594% ( 1) 
00:17:10.633 8.391 - 8.439: 98.3669% ( 1) 00:17:10.633 8.439 - 8.486: 98.3895% ( 3) 00:17:10.633 8.486 - 8.533: 98.3970% ( 1) 00:17:10.633 8.533 - 8.581: 98.4046% ( 1) 00:17:10.633 8.723 - 8.770: 98.4196% ( 2) 00:17:10.633 8.770 - 8.818: 98.4272% ( 1) 00:17:10.633 8.818 - 8.865: 98.4497% ( 3) 00:17:10.633 9.007 - 9.055: 98.4648% ( 2) 00:17:10.633 9.055 - 9.102: 98.4723% ( 1) 00:17:10.633 9.150 - 9.197: 98.4798% ( 1) 00:17:10.633 9.197 - 9.244: 98.4949% ( 2) 00:17:10.633 9.387 - 9.434: 98.5024% ( 1) 00:17:10.633 9.576 - 9.624: 98.5175% ( 2) 00:17:10.633 9.766 - 9.813: 98.5250% ( 1) 00:17:10.633 9.908 - 9.956: 98.5325% ( 1) 00:17:10.633 10.050 - 10.098: 98.5400% ( 1) 00:17:10.633 10.145 - 10.193: 98.5476% ( 1) 00:17:10.633 10.430 - 10.477: 98.5551% ( 1) 00:17:10.633 10.572 - 10.619: 98.5626% ( 1) 00:17:10.633 10.667 - 10.714: 98.5701% ( 1) 00:17:10.633 10.714 - 10.761: 98.5777% ( 1) 00:17:10.633 10.761 - 10.809: 98.5852% ( 1) 00:17:10.633 10.809 - 10.856: 98.6002% ( 2) 00:17:10.633 11.093 - 11.141: 98.6078% ( 1) 00:17:10.633 11.283 - 11.330: 98.6228% ( 2) 00:17:10.633 11.330 - 11.378: 98.6303% ( 1) 00:17:10.633 11.567 - 11.615: 98.6379% ( 1) 00:17:10.633 11.615 - 11.662: 98.6454% ( 1) 00:17:10.633 11.852 - 11.899: 98.6529% ( 1) 00:17:10.633 11.899 - 11.947: 98.6604% ( 1) 00:17:10.633 11.947 - 11.994: 98.6680% ( 1) 00:17:10.633 12.136 - 12.231: 98.6755% ( 1) 00:17:10.633 12.231 - 12.326: 98.6830% ( 1) 00:17:10.633 12.610 - 12.705: 98.6981% ( 2) 00:17:10.633 12.705 - 12.800: 98.7056% ( 1) 00:17:10.633 12.800 - 12.895: 98.7207% ( 2) 00:17:10.633 13.464 - 13.559: 98.7282% ( 1) 00:17:10.633 13.559 - 13.653: 98.7357% ( 1) 00:17:10.633 13.653 - 13.748: 98.7432% ( 1) 00:17:10.633 13.938 - 14.033: 98.7508% ( 1) 00:17:10.633 14.696 - 14.791: 98.7583% ( 1) 00:17:10.633 15.170 - 15.265: 98.7658% ( 1) 00:17:10.633 15.739 - 15.834: 98.7733% ( 1) 00:17:10.633 16.972 - 17.067: 98.7884% ( 2) 00:17:10.633 17.161 - 17.256: 98.8034% ( 2) 00:17:10.633 17.351 - 17.446: 98.8110% ( 1) 
00:17:10.633 17.446 - 17.541: 98.8486% ( 5) 00:17:10.633 17.541 - 17.636: 98.9088% ( 8) 00:17:10.633 17.636 - 17.730: 98.9314% ( 3) 00:17:10.633 17.730 - 17.825: 98.9840% ( 7) 00:17:10.633 17.825 - 17.920: 99.0141% ( 4) 00:17:10.633 17.920 - 18.015: 99.0668% ( 7) 00:17:10.633 18.015 - 18.110: 99.1270% ( 8) 00:17:10.633 18.110 - 18.204: 99.2324% ( 14) 00:17:10.633 18.204 - 18.299: 99.3152% ( 11) 00:17:10.633 18.299 - 18.394: 99.3980% ( 11) 00:17:10.633 18.394 - 18.489: 99.4356% ( 5) 00:17:10.633 18.489 - 18.584: 99.5635% ( 17) 00:17:10.633 18.584 - 18.679: 99.5936% ( 4) 00:17:10.633 18.679 - 18.773: 99.6312% ( 5) 00:17:10.633 18.773 - 18.868: 99.6463% ( 2) 00:17:10.633 18.868 - 18.963: 99.7216% ( 10) 00:17:10.633 18.963 - 19.058: 99.7441% ( 3) 00:17:10.633 19.058 - 19.153: 99.7818% ( 5) 00:17:10.633 19.153 - 19.247: 99.8194% ( 5) 00:17:10.633 19.247 - 19.342: 99.8344% ( 2) 00:17:10.633 19.627 - 19.721: 99.8420% ( 1) 00:17:10.633 19.721 - 19.816: 99.8495% ( 1) 00:17:10.633 20.196 - 20.290: 99.8570% ( 1) 00:17:10.633 21.523 - 21.618: 99.8645% ( 1) 00:17:10.633 22.376 - 22.471: 99.8721% ( 1) 00:17:10.633 22.756 - 22.850: 99.8871% ( 2) 00:17:10.633 22.945 - 23.040: 99.8946% ( 1) 00:17:10.633 23.230 - 23.324: 99.9022% ( 1) 00:17:10.633 23.609 - 23.704: 99.9097% ( 1) 00:17:10.633 23.988 - 24.083: 99.9172% ( 1) 00:17:10.633 26.169 - 26.359: 99.9247% ( 1) 00:17:10.633 3980.705 - 4004.978: 99.9699% ( 6) 00:17:10.633 4004.978 - 4029.250: 99.9925% ( 3) 00:17:10.633 4975.881 - 5000.154: 100.0000% ( 1) 00:17:10.633 00:17:10.633 Complete histogram 00:17:10.633 ================== 00:17:10.633 Range in us Cumulative Count 00:17:10.633 2.062 - 2.074: 3.3489% ( 445) 00:17:10.633 2.074 - 2.086: 39.6598% ( 4825) 00:17:10.633 2.086 - 2.098: 48.6905% ( 1200) 00:17:10.633 2.098 - 2.110: 51.8663% ( 422) 00:17:10.633 2.110 - 2.121: 59.4973% ( 1014) 00:17:10.633 2.121 - 2.133: 61.4088% ( 254) 00:17:10.633 2.133 - 2.145: 65.9693% ( 606) 00:17:10.633 2.145 - 2.157: 74.5861% ( 1145) 
00:17:10.633 2.157 - 2.169: 76.2643% ( 223) 00:17:10.633 2.169 - 2.181: 78.3489% ( 277) 00:17:10.633 2.181 - 2.193: 81.2763% ( 389) 00:17:10.633 2.193 - 2.204: 82.0063% ( 97) 00:17:10.633 2.204 - 2.216: 83.5942% ( 211) 00:17:10.633 2.216 - 2.228: 88.5611% ( 660) 00:17:10.633 2.228 - 2.240: 90.6081% ( 272) 00:17:10.633 2.240 - 2.252: 91.7821% ( 156) 00:17:10.633 2.252 - 2.264: 92.8356% ( 140) 00:17:10.633 2.264 - 2.276: 93.2044% ( 49) 00:17:10.633 2.276 - 2.287: 93.5957% ( 52) 00:17:10.633 2.287 - 2.299: 94.2806% ( 91) 00:17:10.633 2.299 - 2.311: 94.8374% ( 74) 00:17:10.633 2.311 - 2.323: 95.0933% ( 34) 00:17:10.633 2.323 - 2.335: 95.1610% ( 9) 00:17:10.633 2.335 - 2.347: 95.1911% ( 4) 00:17:10.633 2.347 - 2.359: 95.2514% ( 8) 00:17:10.633 2.359 - 2.370: 95.4094% ( 21) 00:17:10.633 2.370 - 2.382: 95.6352% ( 30) 00:17:10.633 2.382 - 2.394: 95.8760% ( 32) 00:17:10.633 2.394 - 2.406: 96.1469% ( 36) 00:17:10.633 2.406 - 2.418: 96.3501% ( 27) 00:17:10.633 2.418 - 2.430: 96.6586% ( 41) 00:17:10.633 2.430 - 2.441: 96.9446% ( 38) 00:17:10.633 2.441 - 2.453: 97.0650% ( 16) 00:17:10.633 2.453 - 2.465: 97.2456% ( 24) 00:17:10.633 2.465 - 2.477: 97.4037% ( 21) 00:17:10.633 2.477 - 2.489: 97.5391% ( 18) 00:17:10.633 2.489 - 2.501: 97.7498% ( 28) 00:17:10.633 2.501 - 2.513: 97.8853% ( 18) 00:17:10.633 2.513 - 2.524: 97.9606% ( 10) 00:17:10.633 2.524 - 2.536: 97.9756% ( 2) 00:17:10.633 2.536 - 2.548: 98.0433% ( 9) 00:17:10.634 2.548 - 2.560: 98.1186% ( 10) 00:17:10.634 2.560 - 2.572: 98.1638% ( 6) 00:17:10.634 2.572 - 2.584: 98.1788% ( 2) 00:17:10.634 2.584 - 2.596: 98.1863% ( 1) 00:17:10.634 2.596 - 2.607: 98.2014% ( 2) 00:17:10.634 2.607 - 2.619: 98.2240% ( 3) 00:17:10.634 2.619 - 2.631: 98.2541% ( 4) 00:17:10.634 2.631 - 2.643: 98.2766% ( 3) 00:17:10.634 2.643 - 2.655: 98.2842% ( 1) 00:17:10.634 2.655 - 2.667: 98.2917% ( 1) 00:17:10.634 2.667 - 2.679: 98.2992% ( 1) 00:17:10.634 2.702 - 2.714: 98.3143% ( 2) 00:17:10.634 2.761 - 2.773: 98.3218% ( 1) 00:17:10.634 2.785 - 2.797: 
98.3368% ( 2) 00:17:10.634 2.797 - 2.809: 98.3444% ( 1) 00:17:10.634 2.809 - 2.821: 98.3669% ( 3) 00:17:10.634 [2024-07-25 04:00:25.881404] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:10.634 3.034 - 3.058: 98.3745% ( 1) 00:17:10.634 3.247 - 3.271: 98.3820% ( 1) 00:17:10.634 3.271 - 3.295: 98.4272% ( 6) 00:17:10.634 3.295 - 3.319: 98.4422% ( 2) 00:17:10.634 3.319 - 3.342: 98.4497% ( 1) 00:17:10.634 3.390 - 3.413: 98.4723% ( 3) 00:17:10.634 3.437 - 3.461: 98.4874% ( 2) 00:17:10.634 3.461 - 3.484: 98.4949% ( 1) 00:17:10.634 3.508 - 3.532: 98.5175% ( 3) 00:17:10.634 3.556 - 3.579: 98.5250% ( 1) 00:17:10.634 3.579 - 3.603: 98.5325% ( 1) 00:17:10.634 3.603 - 3.627: 98.5400% ( 1) 00:17:10.634 3.627 - 3.650: 98.5476% ( 1) 00:17:10.634 3.674 - 3.698: 98.5551% ( 1) 00:17:10.634 3.721 - 3.745: 98.5701% ( 2) 00:17:10.634 3.745 - 3.769: 98.5852% ( 2) 00:17:10.634 3.769 - 3.793: 98.5927% ( 1) 00:17:10.634 3.793 - 3.816: 98.6002% ( 1) 00:17:10.634 3.840 - 3.864: 98.6228% ( 3) 00:17:10.634 3.864 - 3.887: 98.6303% ( 1) 00:17:10.634 3.911 - 3.935: 98.6379% ( 1) 00:17:10.634 3.959 - 3.982: 98.6454% ( 1) 00:17:10.634 4.124 - 4.148: 98.6604% ( 2) 00:17:10.634 4.172 - 4.196: 98.6680% ( 1) 00:17:10.634 5.239 - 5.262: 98.6755% ( 1) 00:17:10.634 5.333 - 5.357: 98.6905% ( 2) 00:17:10.634 5.736 - 5.760: 98.6981% ( 1) 00:17:10.634 5.926 - 5.950: 98.7056% ( 1) 00:17:10.634 5.973 - 5.997: 98.7131% ( 1) 00:17:10.634 6.258 - 6.305: 98.7207% ( 1) 00:17:10.634 6.637 - 6.684: 98.7282% ( 1) 00:17:10.634 6.732 - 6.779: 98.7432% ( 2) 00:17:10.634 6.779 - 6.827: 98.7583% ( 2) 00:17:10.634 6.827 - 6.874: 98.7809% ( 3) 00:17:10.634 6.874 - 6.921: 98.7884% ( 1) 00:17:10.634 7.016 - 7.064: 98.7959% ( 1) 00:17:10.634 7.111 - 7.159: 98.8034% ( 1) 00:17:10.634 7.443 - 7.490: 98.8110% ( 1) 00:17:10.634 7.538 - 7.585: 98.8185% ( 1) 00:17:10.634 8.201 - 8.249: 98.8260% ( 1) 00:17:10.634 8.296 - 8.344: 98.8335% ( 1) 00:17:10.634 8.439 - 8.486: 
98.8411% ( 1) 00:17:10.634 8.913 - 8.960: 98.8486% ( 1) 00:17:10.634 10.335 - 10.382: 98.8561% ( 1) 00:17:10.634 15.360 - 15.455: 98.8636% ( 1) 00:17:10.634 15.644 - 15.739: 98.9088% ( 6) 00:17:10.634 15.739 - 15.834: 98.9238% ( 2) 00:17:10.634 15.929 - 16.024: 98.9615% ( 5) 00:17:10.634 16.024 - 16.119: 98.9765% ( 2) 00:17:10.634 16.119 - 16.213: 98.9991% ( 3) 00:17:10.634 16.213 - 16.308: 99.0217% ( 3) 00:17:10.634 16.308 - 16.403: 99.0518% ( 4) 00:17:10.634 16.403 - 16.498: 99.1270% ( 10) 00:17:10.634 16.498 - 16.593: 99.2023% ( 10) 00:17:10.634 16.593 - 16.687: 99.2474% ( 6) 00:17:10.634 16.687 - 16.782: 99.2851% ( 5) 00:17:10.634 16.782 - 16.877: 99.3227% ( 5) 00:17:10.634 16.877 - 16.972: 99.3377% ( 2) 00:17:10.634 16.972 - 17.067: 99.3528% ( 2) 00:17:10.634 17.067 - 17.161: 99.3679% ( 2) 00:17:10.634 17.161 - 17.256: 99.3829% ( 2) 00:17:10.634 17.256 - 17.351: 99.3904% ( 1) 00:17:10.634 17.351 - 17.446: 99.3980% ( 1) 00:17:10.634 17.446 - 17.541: 99.4055% ( 1) 00:17:10.634 17.541 - 17.636: 99.4130% ( 1) 00:17:10.634 17.730 - 17.825: 99.4205% ( 1) 00:17:10.634 17.825 - 17.920: 99.4281% ( 1) 00:17:10.634 18.015 - 18.110: 99.4356% ( 1) 00:17:10.634 18.110 - 18.204: 99.4431% ( 1) 00:17:10.634 21.049 - 21.144: 99.4506% ( 1) 00:17:10.634 3980.705 - 4004.978: 99.8796% ( 57) 00:17:10.634 4004.978 - 4029.250: 99.9774% ( 13) 00:17:10.634 4975.881 - 5000.154: 99.9925% ( 2) 00:17:10.634 7961.410 - 8009.956: 100.0000% ( 1) 00:17:10.634 00:17:10.891 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:10.891 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:10.891 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:10.891 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:10.892 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:10.892 [ 00:17:10.892 { 00:17:10.892 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:10.892 "subtype": "Discovery", 00:17:10.892 "listen_addresses": [], 00:17:10.892 "allow_any_host": true, 00:17:10.892 "hosts": [] 00:17:10.892 }, 00:17:10.892 { 00:17:10.892 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:10.892 "subtype": "NVMe", 00:17:10.892 "listen_addresses": [ 00:17:10.892 { 00:17:10.892 "trtype": "VFIOUSER", 00:17:10.892 "adrfam": "IPv4", 00:17:10.892 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:10.892 "trsvcid": "0" 00:17:10.892 } 00:17:10.892 ], 00:17:10.892 "allow_any_host": true, 00:17:10.892 "hosts": [], 00:17:10.892 "serial_number": "SPDK1", 00:17:10.892 "model_number": "SPDK bdev Controller", 00:17:10.892 "max_namespaces": 32, 00:17:10.892 "min_cntlid": 1, 00:17:10.892 "max_cntlid": 65519, 00:17:10.892 "namespaces": [ 00:17:10.892 { 00:17:10.892 "nsid": 1, 00:17:10.892 "bdev_name": "Malloc1", 00:17:10.892 "name": "Malloc1", 00:17:10.892 "nguid": "8C91EA21333B4132881D368D9A09D1E9", 00:17:10.892 "uuid": "8c91ea21-333b-4132-881d-368d9a09d1e9" 00:17:10.892 } 00:17:10.892 ] 00:17:10.892 }, 00:17:10.892 { 00:17:10.892 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:10.892 "subtype": "NVMe", 00:17:10.892 "listen_addresses": [ 00:17:10.892 { 00:17:10.892 "trtype": "VFIOUSER", 00:17:10.892 "adrfam": "IPv4", 00:17:10.892 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:10.892 "trsvcid": "0" 00:17:10.892 } 00:17:10.892 ], 00:17:10.892 "allow_any_host": true, 00:17:10.892 "hosts": [], 00:17:10.892 "serial_number": "SPDK2", 00:17:10.892 "model_number": "SPDK bdev Controller", 00:17:10.892 "max_namespaces": 32, 00:17:10.892 "min_cntlid": 1, 00:17:10.892 "max_cntlid": 65519, 00:17:10.892 "namespaces": [ 
00:17:10.892 { 00:17:10.892 "nsid": 1, 00:17:10.892 "bdev_name": "Malloc2", 00:17:10.892 "name": "Malloc2", 00:17:10.892 "nguid": "FB4EB22E62C74A91859F60885CC7D5A5", 00:17:10.892 "uuid": "fb4eb22e-62c7-4a91-859f-60885cc7d5a5" 00:17:10.892 } 00:17:10.892 ] 00:17:10.892 } 00:17:10.892 ] 00:17:11.149 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:11.149 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=824896 00:17:11.149 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:11.149 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:11.149 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:11.149 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:11.149 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:11.149 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:11.149 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:11.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:11.150 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.150 [2024-07-25 04:00:26.344719] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:11.406 Malloc3 00:17:11.406 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:11.406 [2024-07-25 04:00:26.700325] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:11.664 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:11.664 Asynchronous Event Request test 00:17:11.664 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:11.664 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:11.664 Registering asynchronous event callbacks... 00:17:11.664 Starting namespace attribute notice tests for all controllers... 00:17:11.664 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:11.664 aer_cb - Changed Namespace 00:17:11.664 Cleaning up... 
00:17:11.664 [ 00:17:11.664 { 00:17:11.664 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:11.664 "subtype": "Discovery", 00:17:11.664 "listen_addresses": [], 00:17:11.664 "allow_any_host": true, 00:17:11.664 "hosts": [] 00:17:11.664 }, 00:17:11.664 { 00:17:11.664 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:11.664 "subtype": "NVMe", 00:17:11.664 "listen_addresses": [ 00:17:11.664 { 00:17:11.664 "trtype": "VFIOUSER", 00:17:11.664 "adrfam": "IPv4", 00:17:11.664 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:11.664 "trsvcid": "0" 00:17:11.664 } 00:17:11.664 ], 00:17:11.664 "allow_any_host": true, 00:17:11.664 "hosts": [], 00:17:11.664 "serial_number": "SPDK1", 00:17:11.664 "model_number": "SPDK bdev Controller", 00:17:11.664 "max_namespaces": 32, 00:17:11.664 "min_cntlid": 1, 00:17:11.664 "max_cntlid": 65519, 00:17:11.664 "namespaces": [ 00:17:11.664 { 00:17:11.664 "nsid": 1, 00:17:11.664 "bdev_name": "Malloc1", 00:17:11.664 "name": "Malloc1", 00:17:11.664 "nguid": "8C91EA21333B4132881D368D9A09D1E9", 00:17:11.664 "uuid": "8c91ea21-333b-4132-881d-368d9a09d1e9" 00:17:11.664 }, 00:17:11.664 { 00:17:11.664 "nsid": 2, 00:17:11.664 "bdev_name": "Malloc3", 00:17:11.664 "name": "Malloc3", 00:17:11.664 "nguid": "525DD13833CF480EB3587995E64B18C2", 00:17:11.664 "uuid": "525dd138-33cf-480e-b358-7995e64b18c2" 00:17:11.664 } 00:17:11.664 ] 00:17:11.664 }, 00:17:11.664 { 00:17:11.664 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:11.664 "subtype": "NVMe", 00:17:11.664 "listen_addresses": [ 00:17:11.664 { 00:17:11.664 "trtype": "VFIOUSER", 00:17:11.664 "adrfam": "IPv4", 00:17:11.664 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:11.664 "trsvcid": "0" 00:17:11.664 } 00:17:11.664 ], 00:17:11.664 "allow_any_host": true, 00:17:11.664 "hosts": [], 00:17:11.664 "serial_number": "SPDK2", 00:17:11.664 "model_number": "SPDK bdev Controller", 00:17:11.664 "max_namespaces": 32, 00:17:11.664 "min_cntlid": 1, 00:17:11.664 "max_cntlid": 65519, 00:17:11.664 "namespaces": [ 
00:17:11.664 { 00:17:11.664 "nsid": 1, 00:17:11.664 "bdev_name": "Malloc2", 00:17:11.664 "name": "Malloc2", 00:17:11.664 "nguid": "FB4EB22E62C74A91859F60885CC7D5A5", 00:17:11.664 "uuid": "fb4eb22e-62c7-4a91-859f-60885cc7d5a5" 00:17:11.664 } 00:17:11.664 ] 00:17:11.664 } 00:17:11.664 ] 00:17:11.923 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 824896 00:17:11.923 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:11.923 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:11.923 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:11.923 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:11.923 [2024-07-25 04:00:26.985821] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:17:11.923 [2024-07-25 04:00:26.985866] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824909 ] 00:17:11.923 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.923 [2024-07-25 04:00:27.003896] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:11.923 [2024-07-25 04:00:27.021429] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:11.923 [2024-07-25 04:00:27.027524] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:11.923 [2024-07-25 04:00:27.027573] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f15cd27b000 00:17:11.923 [2024-07-25 04:00:27.028519] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:11.923 [2024-07-25 04:00:27.029527] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:11.923 [2024-07-25 04:00:27.030534] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:11.923 [2024-07-25 04:00:27.031539] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:11.923 [2024-07-25 04:00:27.032559] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:11.923 [2024-07-25 04:00:27.033568] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:11.923 [2024-07-25 04:00:27.034576] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:11.923 [2024-07-25 04:00:27.035594] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:11.923 [2024-07-25 04:00:27.036610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 
00:17:11.923 [2024-07-25 04:00:27.036632] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f15cc03d000 00:17:11.923 [2024-07-25 04:00:27.037745] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:11.923 [2024-07-25 04:00:27.051906] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:11.923 [2024-07-25 04:00:27.051940] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:11.923 [2024-07-25 04:00:27.057033] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:11.923 [2024-07-25 04:00:27.057088] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:11.923 [2024-07-25 04:00:27.057177] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:11.923 [2024-07-25 04:00:27.057198] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:11.923 [2024-07-25 04:00:27.057208] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:11.923 [2024-07-25 04:00:27.058035] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:11.923 [2024-07-25 04:00:27.058059] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:11.923 [2024-07-25 04:00:27.058073] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:11.923 [2024-07-25 04:00:27.059042] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:11.923 [2024-07-25 04:00:27.059061] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:11.923 [2024-07-25 04:00:27.059074] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:11.923 [2024-07-25 04:00:27.060049] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:11.923 [2024-07-25 04:00:27.060068] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:11.923 [2024-07-25 04:00:27.061074] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:11.923 [2024-07-25 04:00:27.061094] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:11.923 [2024-07-25 04:00:27.061103] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:11.923 [2024-07-25 04:00:27.061115] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:11.923 [2024-07-25 04:00:27.061236] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:11.923 [2024-07-25 04:00:27.061251] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:11.923 [2024-07-25 04:00:27.061261] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:11.923 [2024-07-25 04:00:27.062067] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:11.923 [2024-07-25 04:00:27.063070] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:11.923 [2024-07-25 04:00:27.064075] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:11.923 [2024-07-25 04:00:27.065072] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:11.923 [2024-07-25 04:00:27.065154] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:11.923 [2024-07-25 04:00:27.066095] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:11.923 [2024-07-25 04:00:27.066114] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:11.923 [2024-07-25 04:00:27.066124] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:11.923 [2024-07-25 04:00:27.066147] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:11.923 [2024-07-25 04:00:27.066161] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:11.923 [2024-07-25 04:00:27.066197] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:11.923 [2024-07-25 04:00:27.066208] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:11.923 [2024-07-25 04:00:27.066215] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:11.923 [2024-07-25 04:00:27.066233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:11.923 [2024-07-25 04:00:27.074257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:11.923 [2024-07-25 04:00:27.074280] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:11.923 [2024-07-25 04:00:27.074289] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:11.923 [2024-07-25 04:00:27.074297] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:11.923 [2024-07-25 04:00:27.074304] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:11.923 [2024-07-25 04:00:27.074312] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:11.923 [2024-07-25 04:00:27.074320] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:11.923 [2024-07-25 04:00:27.074328] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting 
state to configure AER (timeout 30000 ms) 00:17:11.923 [2024-07-25 04:00:27.074341] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:11.923 [2024-07-25 04:00:27.074361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:11.923 [2024-07-25 04:00:27.082253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:11.924 [2024-07-25 04:00:27.082284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.924 [2024-07-25 04:00:27.082300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.924 [2024-07-25 04:00:27.082313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.924 [2024-07-25 04:00:27.082329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.924 [2024-07-25 04:00:27.082339] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.082355] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.082370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:11.924 [2024-07-25 04:00:27.090256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 
p:1 m:0 dnr:0 00:17:11.924 [2024-07-25 04:00:27.090285] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:11.924 [2024-07-25 04:00:27.090295] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.090310] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.090322] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.090336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:11.924 [2024-07-25 04:00:27.098258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:11.924 [2024-07-25 04:00:27.098346] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.098363] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.098376] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:11.924 [2024-07-25 04:00:27.098385] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:11.924 [2024-07-25 04:00:27.098391] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:11.924 [2024-07-25 04:00:27.098402] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:11.924 [2024-07-25 04:00:27.106254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:11.924 [2024-07-25 04:00:27.106303] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:11.924 [2024-07-25 04:00:27.106325] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.106341] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.106355] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:11.924 [2024-07-25 04:00:27.106364] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:11.924 [2024-07-25 04:00:27.106371] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:11.924 [2024-07-25 04:00:27.106381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:11.924 [2024-07-25 04:00:27.114255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:11.924 [2024-07-25 04:00:27.114314] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.114332] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:17:11.924 [2024-07-25 04:00:27.114346] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:11.924 [2024-07-25 04:00:27.114355] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:11.924 [2024-07-25 04:00:27.114361] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:11.924 [2024-07-25 04:00:27.114372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:11.924 [2024-07-25 04:00:27.122252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:11.924 [2024-07-25 04:00:27.122284] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.122313] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.122328] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.122343] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.122353] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.122362] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.122370] 
nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:11.924 [2024-07-25 04:00:27.122378] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:11.924 [2024-07-25 04:00:27.122387] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:11.924 [2024-07-25 04:00:27.122412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:11.924 [2024-07-25 04:00:27.130254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:11.924 [2024-07-25 04:00:27.130306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:11.924 [2024-07-25 04:00:27.138268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:11.924 [2024-07-25 04:00:27.138300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:11.924 [2024-07-25 04:00:27.146255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:11.924 [2024-07-25 04:00:27.146279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:11.924 [2024-07-25 04:00:27.154257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:11.924 [2024-07-25 04:00:27.154288] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:11.924 
[2024-07-25 04:00:27.154304] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:11.924 [2024-07-25 04:00:27.154311] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:11.924 [2024-07-25 04:00:27.154317] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:11.924 [2024-07-25 04:00:27.154323] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:11.924 [2024-07-25 04:00:27.154333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:11.924 [2024-07-25 04:00:27.154344] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:11.924 [2024-07-25 04:00:27.154353] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:11.924 [2024-07-25 04:00:27.154359] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:11.924 [2024-07-25 04:00:27.154368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:11.924 [2024-07-25 04:00:27.154379] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:11.924 [2024-07-25 04:00:27.154387] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:11.924 [2024-07-25 04:00:27.154393] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:11.924 [2024-07-25 04:00:27.154402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:11.924 [2024-07-25 04:00:27.154413] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:11.924 [2024-07-25 04:00:27.154421] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:11.924 [2024-07-25 04:00:27.154427] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:11.924 [2024-07-25 04:00:27.154437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:11.924 [2024-07-25 04:00:27.162268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:11.924 [2024-07-25 04:00:27.162295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:11.924 [2024-07-25 04:00:27.162312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:11.924 [2024-07-25 04:00:27.162325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:11.924 ===================================================== 00:17:11.924 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:11.924 ===================================================== 00:17:11.924 Controller Capabilities/Features 00:17:11.924 ================================ 00:17:11.924 Vendor ID: 4e58 00:17:11.924 Subsystem Vendor ID: 4e58 00:17:11.924 Serial Number: SPDK2 00:17:11.924 Model Number: SPDK bdev Controller 00:17:11.924 Firmware Version: 24.09 00:17:11.924 Recommended Arb Burst: 6 00:17:11.924 IEEE OUI Identifier: 8d 6b 50 00:17:11.924 Multi-path I/O 00:17:11.924 May have multiple subsystem ports: Yes 00:17:11.925 May have multiple controllers: Yes 00:17:11.925 Associated with SR-IOV VF: No 
00:17:11.925 Max Data Transfer Size: 131072 00:17:11.925 Max Number of Namespaces: 32 00:17:11.925 Max Number of I/O Queues: 127 00:17:11.925 NVMe Specification Version (VS): 1.3 00:17:11.925 NVMe Specification Version (Identify): 1.3 00:17:11.925 Maximum Queue Entries: 256 00:17:11.925 Contiguous Queues Required: Yes 00:17:11.925 Arbitration Mechanisms Supported 00:17:11.925 Weighted Round Robin: Not Supported 00:17:11.925 Vendor Specific: Not Supported 00:17:11.925 Reset Timeout: 15000 ms 00:17:11.925 Doorbell Stride: 4 bytes 00:17:11.925 NVM Subsystem Reset: Not Supported 00:17:11.925 Command Sets Supported 00:17:11.925 NVM Command Set: Supported 00:17:11.925 Boot Partition: Not Supported 00:17:11.925 Memory Page Size Minimum: 4096 bytes 00:17:11.925 Memory Page Size Maximum: 4096 bytes 00:17:11.925 Persistent Memory Region: Not Supported 00:17:11.925 Optional Asynchronous Events Supported 00:17:11.925 Namespace Attribute Notices: Supported 00:17:11.925 Firmware Activation Notices: Not Supported 00:17:11.925 ANA Change Notices: Not Supported 00:17:11.925 PLE Aggregate Log Change Notices: Not Supported 00:17:11.925 LBA Status Info Alert Notices: Not Supported 00:17:11.925 EGE Aggregate Log Change Notices: Not Supported 00:17:11.925 Normal NVM Subsystem Shutdown event: Not Supported 00:17:11.925 Zone Descriptor Change Notices: Not Supported 00:17:11.925 Discovery Log Change Notices: Not Supported 00:17:11.925 Controller Attributes 00:17:11.925 128-bit Host Identifier: Supported 00:17:11.925 Non-Operational Permissive Mode: Not Supported 00:17:11.925 NVM Sets: Not Supported 00:17:11.925 Read Recovery Levels: Not Supported 00:17:11.925 Endurance Groups: Not Supported 00:17:11.925 Predictable Latency Mode: Not Supported 00:17:11.925 Traffic Based Keep ALive: Not Supported 00:17:11.925 Namespace Granularity: Not Supported 00:17:11.925 SQ Associations: Not Supported 00:17:11.925 UUID List: Not Supported 00:17:11.925 Multi-Domain Subsystem: Not Supported 00:17:11.925 
Fixed Capacity Management: Not Supported 00:17:11.925 Variable Capacity Management: Not Supported 00:17:11.925 Delete Endurance Group: Not Supported 00:17:11.925 Delete NVM Set: Not Supported 00:17:11.925 Extended LBA Formats Supported: Not Supported 00:17:11.925 Flexible Data Placement Supported: Not Supported 00:17:11.925 00:17:11.925 Controller Memory Buffer Support 00:17:11.925 ================================ 00:17:11.925 Supported: No 00:17:11.925 00:17:11.925 Persistent Memory Region Support 00:17:11.925 ================================ 00:17:11.925 Supported: No 00:17:11.925 00:17:11.925 Admin Command Set Attributes 00:17:11.925 ============================ 00:17:11.925 Security Send/Receive: Not Supported 00:17:11.925 Format NVM: Not Supported 00:17:11.925 Firmware Activate/Download: Not Supported 00:17:11.925 Namespace Management: Not Supported 00:17:11.925 Device Self-Test: Not Supported 00:17:11.925 Directives: Not Supported 00:17:11.925 NVMe-MI: Not Supported 00:17:11.925 Virtualization Management: Not Supported 00:17:11.925 Doorbell Buffer Config: Not Supported 00:17:11.925 Get LBA Status Capability: Not Supported 00:17:11.925 Command & Feature Lockdown Capability: Not Supported 00:17:11.925 Abort Command Limit: 4 00:17:11.925 Async Event Request Limit: 4 00:17:11.925 Number of Firmware Slots: N/A 00:17:11.925 Firmware Slot 1 Read-Only: N/A 00:17:11.925 Firmware Activation Without Reset: N/A 00:17:11.925 Multiple Update Detection Support: N/A 00:17:11.925 Firmware Update Granularity: No Information Provided 00:17:11.925 Per-Namespace SMART Log: No 00:17:11.925 Asymmetric Namespace Access Log Page: Not Supported 00:17:11.925 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:11.925 Command Effects Log Page: Supported 00:17:11.925 Get Log Page Extended Data: Supported 00:17:11.925 Telemetry Log Pages: Not Supported 00:17:11.925 Persistent Event Log Pages: Not Supported 00:17:11.925 Supported Log Pages Log Page: May Support 00:17:11.925 Commands Supported & 
Effects Log Page: Not Supported 00:17:11.925 Feature Identifiers & Effects Log Page:May Support 00:17:11.925 NVMe-MI Commands & Effects Log Page: May Support 00:17:11.925 Data Area 4 for Telemetry Log: Not Supported 00:17:11.925 Error Log Page Entries Supported: 128 00:17:11.925 Keep Alive: Supported 00:17:11.925 Keep Alive Granularity: 10000 ms 00:17:11.925 00:17:11.925 NVM Command Set Attributes 00:17:11.925 ========================== 00:17:11.925 Submission Queue Entry Size 00:17:11.925 Max: 64 00:17:11.925 Min: 64 00:17:11.925 Completion Queue Entry Size 00:17:11.925 Max: 16 00:17:11.925 Min: 16 00:17:11.925 Number of Namespaces: 32 00:17:11.925 Compare Command: Supported 00:17:11.925 Write Uncorrectable Command: Not Supported 00:17:11.925 Dataset Management Command: Supported 00:17:11.925 Write Zeroes Command: Supported 00:17:11.925 Set Features Save Field: Not Supported 00:17:11.925 Reservations: Not Supported 00:17:11.925 Timestamp: Not Supported 00:17:11.925 Copy: Supported 00:17:11.925 Volatile Write Cache: Present 00:17:11.925 Atomic Write Unit (Normal): 1 00:17:11.925 Atomic Write Unit (PFail): 1 00:17:11.925 Atomic Compare & Write Unit: 1 00:17:11.925 Fused Compare & Write: Supported 00:17:11.925 Scatter-Gather List 00:17:11.925 SGL Command Set: Supported (Dword aligned) 00:17:11.925 SGL Keyed: Not Supported 00:17:11.925 SGL Bit Bucket Descriptor: Not Supported 00:17:11.925 SGL Metadata Pointer: Not Supported 00:17:11.925 Oversized SGL: Not Supported 00:17:11.925 SGL Metadata Address: Not Supported 00:17:11.925 SGL Offset: Not Supported 00:17:11.925 Transport SGL Data Block: Not Supported 00:17:11.925 Replay Protected Memory Block: Not Supported 00:17:11.925 00:17:11.925 Firmware Slot Information 00:17:11.925 ========================= 00:17:11.925 Active slot: 1 00:17:11.925 Slot 1 Firmware Revision: 24.09 00:17:11.925 00:17:11.925 00:17:11.925 Commands Supported and Effects 00:17:11.925 ============================== 00:17:11.925 Admin Commands 
00:17:11.925 -------------- 00:17:11.925 Get Log Page (02h): Supported 00:17:11.925 Identify (06h): Supported 00:17:11.925 Abort (08h): Supported 00:17:11.925 Set Features (09h): Supported 00:17:11.925 Get Features (0Ah): Supported 00:17:11.925 Asynchronous Event Request (0Ch): Supported 00:17:11.925 Keep Alive (18h): Supported 00:17:11.925 I/O Commands 00:17:11.925 ------------ 00:17:11.925 Flush (00h): Supported LBA-Change 00:17:11.925 Write (01h): Supported LBA-Change 00:17:11.925 Read (02h): Supported 00:17:11.925 Compare (05h): Supported 00:17:11.925 Write Zeroes (08h): Supported LBA-Change 00:17:11.925 Dataset Management (09h): Supported LBA-Change 00:17:11.925 Copy (19h): Supported LBA-Change 00:17:11.925 00:17:11.925 Error Log 00:17:11.925 ========= 00:17:11.925 00:17:11.925 Arbitration 00:17:11.925 =========== 00:17:11.925 Arbitration Burst: 1 00:17:11.925 00:17:11.925 Power Management 00:17:11.925 ================ 00:17:11.925 Number of Power States: 1 00:17:11.925 Current Power State: Power State #0 00:17:11.925 Power State #0: 00:17:11.925 Max Power: 0.00 W 00:17:11.925 Non-Operational State: Operational 00:17:11.925 Entry Latency: Not Reported 00:17:11.925 Exit Latency: Not Reported 00:17:11.925 Relative Read Throughput: 0 00:17:11.925 Relative Read Latency: 0 00:17:11.925 Relative Write Throughput: 0 00:17:11.925 Relative Write Latency: 0 00:17:11.925 Idle Power: Not Reported 00:17:11.925 Active Power: Not Reported 00:17:11.925 Non-Operational Permissive Mode: Not Supported 00:17:11.925 00:17:11.925 Health Information 00:17:11.925 ================== 00:17:11.925 Critical Warnings: 00:17:11.925 Available Spare Space: OK 00:17:11.925 Temperature: OK 00:17:11.925 Device Reliability: OK 00:17:11.925 Read Only: No 00:17:11.925 Volatile Memory Backup: OK 00:17:11.925 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:11.925 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:11.925 Available Spare: 0% 00:17:11.925 [2024-07-25 04:00:27.162439] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:11.925 [2024-07-25 04:00:27.170270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:11.926 [2024-07-25 04:00:27.170321] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:11.926 [2024-07-25 04:00:27.170340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.926 [2024-07-25 04:00:27.170351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.926 [2024-07-25 04:00:27.170362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.926 [2024-07-25 04:00:27.170372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.926 [2024-07-25 04:00:27.170455] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:11.926 [2024-07-25 04:00:27.170480] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:11.926 [2024-07-25 04:00:27.171463] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:11.926 [2024-07-25 04:00:27.171551] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:11.926 [2024-07-25 04:00:27.171580] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:11.926 [2024-07-25 
04:00:27.172474] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:11.926 [2024-07-25 04:00:27.172498] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:11.926 [2024-07-25 04:00:27.172565] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:11.926 [2024-07-25 04:00:27.173767] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:11.926 Available Spare Threshold: 0% 00:17:11.926 Life Percentage Used: 0% 00:17:11.926 Data Units Read: 0 00:17:11.926 Data Units Written: 0 00:17:11.926 Host Read Commands: 0 00:17:11.926 Host Write Commands: 0 00:17:11.926 Controller Busy Time: 0 minutes 00:17:11.926 Power Cycles: 0 00:17:11.926 Power On Hours: 0 hours 00:17:11.926 Unsafe Shutdowns: 0 00:17:11.926 Unrecoverable Media Errors: 0 00:17:11.926 Lifetime Error Log Entries: 0 00:17:11.926 Warning Temperature Time: 0 minutes 00:17:11.926 Critical Temperature Time: 0 minutes 00:17:11.926 00:17:11.926 Number of Queues 00:17:11.926 ================ 00:17:11.926 Number of I/O Submission Queues: 127 00:17:11.926 Number of I/O Completion Queues: 127 00:17:11.926 00:17:11.926 Active Namespaces 00:17:11.926 ================= 00:17:11.926 Namespace ID:1 00:17:11.926 Error Recovery Timeout: Unlimited 00:17:11.926 Command Set Identifier: NVM (00h) 00:17:11.926 Deallocate: Supported 00:17:11.926 Deallocated/Unwritten Error: Not Supported 00:17:11.926 Deallocated Read Value: Unknown 00:17:11.926 Deallocate in Write Zeroes: Not Supported 00:17:11.926 Deallocated Guard Field: 0xFFFF 00:17:11.926 Flush: Supported 00:17:11.926 Reservation: Supported 00:17:11.926 Namespace Sharing Capabilities: Multiple Controllers 00:17:11.926 Size (in LBAs): 131072 (0GiB) 00:17:11.926 Capacity (in LBAs): 
131072 (0GiB) 00:17:11.926 Utilization (in LBAs): 131072 (0GiB) 00:17:11.926 NGUID: FB4EB22E62C74A91859F60885CC7D5A5 00:17:11.926 UUID: fb4eb22e-62c7-4a91-859f-60885cc7d5a5 00:17:11.926 Thin Provisioning: Not Supported 00:17:11.926 Per-NS Atomic Units: Yes 00:17:11.926 Atomic Boundary Size (Normal): 0 00:17:11.926 Atomic Boundary Size (PFail): 0 00:17:11.926 Atomic Boundary Offset: 0 00:17:11.926 Maximum Single Source Range Length: 65535 00:17:11.926 Maximum Copy Length: 65535 00:17:11.926 Maximum Source Range Count: 1 00:17:11.926 NGUID/EUI64 Never Reused: No 00:17:11.926 Namespace Write Protected: No 00:17:11.926 Number of LBA Formats: 1 00:17:11.926 Current LBA Format: LBA Format #00 00:17:11.926 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:11.926 00:17:11.926 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:12.183 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.183 [2024-07-25 04:00:27.406156] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:17.439 Initializing NVMe Controllers 00:17:17.439 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:17.439 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:17.439 Initialization complete. Launching workers. 
00:17:17.439 ======================================================== 00:17:17.439 Latency(us) 00:17:17.439 Device Information : IOPS MiB/s Average min max 00:17:17.439 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35617.37 139.13 3593.22 1148.21 9672.62 00:17:17.439 ======================================================== 00:17:17.439 Total : 35617.37 139.13 3593.22 1148.21 9672.62 00:17:17.439 00:17:17.439 [2024-07-25 04:00:32.505595] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:17.439 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:17.439 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.439 [2024-07-25 04:00:32.737273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:22.696 Initializing NVMe Controllers 00:17:22.696 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:22.696 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:22.696 Initialization complete. Launching workers. 
00:17:22.696 ======================================================== 00:17:22.696 Latency(us) 00:17:22.696 Device Information : IOPS MiB/s Average min max 00:17:22.696 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32857.77 128.35 3895.51 1189.38 8689.86 00:17:22.696 ======================================================== 00:17:22.696 Total : 32857.77 128.35 3895.51 1189.38 8689.86 00:17:22.696 00:17:22.696 [2024-07-25 04:00:37.759833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:22.696 04:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:22.696 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.696 [2024-07-25 04:00:37.979649] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:27.959 [2024-07-25 04:00:43.109381] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:27.959 Initializing NVMe Controllers 00:17:27.959 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:27.959 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:27.959 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:27.959 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:27.959 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:27.959 Initialization complete. Launching workers. 
00:17:27.959 Starting thread on core 2 00:17:27.959 Starting thread on core 3 00:17:27.959 Starting thread on core 1 00:17:27.959 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:27.959 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.215 [2024-07-25 04:00:43.423789] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:31.493 [2024-07-25 04:00:46.503309] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:31.493 Initializing NVMe Controllers 00:17:31.493 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:31.493 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:31.493 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:31.493 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:31.493 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:31.493 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:31.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:31.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:31.493 Initialization complete. Launching workers. 
00:17:31.493 Starting thread on core 1 with urgent priority queue 00:17:31.493 Starting thread on core 2 with urgent priority queue 00:17:31.493 Starting thread on core 3 with urgent priority queue 00:17:31.493 Starting thread on core 0 with urgent priority queue 00:17:31.493 SPDK bdev Controller (SPDK2 ) core 0: 4854.67 IO/s 20.60 secs/100000 ios 00:17:31.493 SPDK bdev Controller (SPDK2 ) core 1: 5406.67 IO/s 18.50 secs/100000 ios 00:17:31.493 SPDK bdev Controller (SPDK2 ) core 2: 5404.00 IO/s 18.50 secs/100000 ios 00:17:31.493 SPDK bdev Controller (SPDK2 ) core 3: 5582.00 IO/s 17.91 secs/100000 ios 00:17:31.493 ======================================================== 00:17:31.493 00:17:31.493 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:31.493 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.750 [2024-07-25 04:00:46.794766] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:31.750 Initializing NVMe Controllers 00:17:31.750 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:31.750 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:31.750 Namespace ID: 1 size: 0GB 00:17:31.750 Initialization complete. 00:17:31.750 INFO: using host memory buffer for IO 00:17:31.750 Hello world! 
00:17:31.750 [2024-07-25 04:00:46.802821] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:31.751 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:31.751 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.038 [2024-07-25 04:00:47.085903] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:32.971 Initializing NVMe Controllers 00:17:32.971 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:32.971 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:32.971 Initialization complete. Launching workers. 00:17:32.971 submit (in ns) avg, min, max = 7214.3, 3570.0, 4019956.7 00:17:32.971 complete (in ns) avg, min, max = 26961.0, 2095.6, 4017783.3 00:17:32.971 00:17:32.971 Submit histogram 00:17:32.971 ================ 00:17:32.971 Range in us Cumulative Count 00:17:32.971 3.556 - 3.579: 0.2356% ( 32) 00:17:32.971 3.579 - 3.603: 1.0751% ( 114) 00:17:32.971 3.603 - 3.627: 2.7688% ( 230) 00:17:32.971 3.627 - 3.650: 7.2754% ( 612) 00:17:32.971 3.650 - 3.674: 13.5714% ( 855) 00:17:32.971 3.674 - 3.698: 22.8424% ( 1259) 00:17:32.971 3.698 - 3.721: 31.4728% ( 1172) 00:17:32.971 3.721 - 3.745: 40.1473% ( 1178) 00:17:32.971 3.745 - 3.769: 47.0839% ( 942) 00:17:32.971 3.769 - 3.793: 53.7261% ( 902) 00:17:32.971 3.793 - 3.816: 59.2047% ( 744) 00:17:32.971 3.816 - 3.840: 63.6303% ( 601) 00:17:32.971 3.840 - 3.864: 68.2032% ( 621) 00:17:32.971 3.864 - 3.887: 71.6789% ( 472) 00:17:32.971 3.887 - 3.911: 75.1915% ( 477) 00:17:32.971 3.911 - 3.935: 79.0722% ( 527) 00:17:32.972 3.935 - 3.959: 82.2754% ( 435) 00:17:32.972 3.959 - 3.982: 85.0442% ( 376) 00:17:32.972 3.982 - 4.006: 87.4448% ( 326) 00:17:32.972 4.006 - 
4.030: 89.2931% ( 251) 00:17:32.972 4.030 - 4.053: 90.7732% ( 201) 00:17:32.972 4.053 - 4.077: 92.1060% ( 181) 00:17:32.972 4.077 - 4.101: 93.1517% ( 142) 00:17:32.972 4.101 - 4.124: 93.9028% ( 102) 00:17:32.972 4.124 - 4.148: 94.5803% ( 92) 00:17:32.972 4.148 - 4.172: 95.1399% ( 76) 00:17:32.972 4.172 - 4.196: 95.5596% ( 57) 00:17:32.972 4.196 - 4.219: 95.9426% ( 52) 00:17:32.972 4.219 - 4.243: 96.2077% ( 36) 00:17:32.972 4.243 - 4.267: 96.4212% ( 29) 00:17:32.972 4.267 - 4.290: 96.6421% ( 30) 00:17:32.972 4.290 - 4.314: 96.7599% ( 16) 00:17:32.972 4.314 - 4.338: 96.8851% ( 17) 00:17:32.972 4.338 - 4.361: 97.0029% ( 16) 00:17:32.972 4.361 - 4.385: 97.0692% ( 9) 00:17:32.972 4.385 - 4.409: 97.1502% ( 11) 00:17:32.972 4.409 - 4.433: 97.2754% ( 17) 00:17:32.972 4.433 - 4.456: 97.3711% ( 13) 00:17:32.972 4.456 - 4.480: 97.3859% ( 2) 00:17:32.972 4.480 - 4.504: 97.4595% ( 10) 00:17:32.972 4.504 - 4.527: 97.5552% ( 13) 00:17:32.972 4.527 - 4.551: 97.5773% ( 3) 00:17:32.972 4.551 - 4.575: 97.5847% ( 1) 00:17:32.972 4.599 - 4.622: 97.6068% ( 3) 00:17:32.972 4.622 - 4.646: 97.6215% ( 2) 00:17:32.972 4.670 - 4.693: 97.6289% ( 1) 00:17:32.972 4.693 - 4.717: 97.6436% ( 2) 00:17:32.972 4.717 - 4.741: 97.6583% ( 2) 00:17:32.972 4.741 - 4.764: 97.6878% ( 4) 00:17:32.972 4.764 - 4.788: 97.7172% ( 4) 00:17:32.972 4.788 - 4.812: 97.7246% ( 1) 00:17:32.972 4.812 - 4.836: 97.7393% ( 2) 00:17:32.972 4.836 - 4.859: 97.8056% ( 9) 00:17:32.972 4.859 - 4.883: 97.8498% ( 6) 00:17:32.972 4.883 - 4.907: 97.9234% ( 10) 00:17:32.972 4.907 - 4.930: 97.9823% ( 8) 00:17:32.972 4.930 - 4.954: 98.0265% ( 6) 00:17:32.972 4.954 - 4.978: 98.0633% ( 5) 00:17:32.972 4.978 - 5.001: 98.1075% ( 6) 00:17:32.972 5.001 - 5.025: 98.1222% ( 2) 00:17:32.972 5.025 - 5.049: 98.1959% ( 10) 00:17:32.972 5.049 - 5.073: 98.2253% ( 4) 00:17:32.972 5.073 - 5.096: 98.2769% ( 7) 00:17:32.972 5.096 - 5.120: 98.2916% ( 2) 00:17:32.972 5.144 - 5.167: 98.2990% ( 1) 00:17:32.972 5.167 - 5.191: 98.3137% ( 2) 00:17:32.972 5.191 
- 5.215: 98.3579% ( 6) 00:17:32.972 5.215 - 5.239: 98.3800% ( 3) 00:17:32.972 5.262 - 5.286: 98.3873% ( 1) 00:17:32.972 5.286 - 5.310: 98.4021% ( 2) 00:17:32.972 5.310 - 5.333: 98.4242% ( 3) 00:17:32.972 5.357 - 5.381: 98.4315% ( 1) 00:17:32.972 5.381 - 5.404: 98.4389% ( 1) 00:17:32.972 5.404 - 5.428: 98.4462% ( 1) 00:17:32.972 5.428 - 5.452: 98.4536% ( 1) 00:17:32.972 5.523 - 5.547: 98.4683% ( 2) 00:17:32.972 5.547 - 5.570: 98.4757% ( 1) 00:17:32.972 5.570 - 5.594: 98.4831% ( 1) 00:17:32.972 5.594 - 5.618: 98.4904% ( 1) 00:17:32.972 5.618 - 5.641: 98.4978% ( 1) 00:17:32.972 5.665 - 5.689: 98.5052% ( 1) 00:17:32.972 5.760 - 5.784: 98.5125% ( 1) 00:17:32.972 5.784 - 5.807: 98.5272% ( 2) 00:17:32.972 5.950 - 5.973: 98.5346% ( 1) 00:17:32.972 6.021 - 6.044: 98.5420% ( 1) 00:17:32.972 6.068 - 6.116: 98.5493% ( 1) 00:17:32.972 6.116 - 6.163: 98.5567% ( 1) 00:17:32.972 6.353 - 6.400: 98.5641% ( 1) 00:17:32.972 6.400 - 6.447: 98.5788% ( 2) 00:17:32.972 6.447 - 6.495: 98.5862% ( 1) 00:17:32.972 6.637 - 6.684: 98.5935% ( 1) 00:17:32.972 6.732 - 6.779: 98.6009% ( 1) 00:17:32.972 6.779 - 6.827: 98.6156% ( 2) 00:17:32.972 6.874 - 6.921: 98.6303% ( 2) 00:17:32.972 6.969 - 7.016: 98.6377% ( 1) 00:17:32.972 7.016 - 7.064: 98.6524% ( 2) 00:17:32.972 7.064 - 7.111: 98.6598% ( 1) 00:17:32.972 7.111 - 7.159: 98.6672% ( 1) 00:17:32.972 7.206 - 7.253: 98.6745% ( 1) 00:17:32.972 7.253 - 7.301: 98.6819% ( 1) 00:17:32.972 7.301 - 7.348: 98.6892% ( 1) 00:17:32.972 7.348 - 7.396: 98.7040% ( 2) 00:17:32.972 7.396 - 7.443: 98.7187% ( 2) 00:17:32.972 7.443 - 7.490: 98.7261% ( 1) 00:17:32.972 7.585 - 7.633: 98.7334% ( 1) 00:17:32.972 7.680 - 7.727: 98.7482% ( 2) 00:17:32.972 7.917 - 7.964: 98.7555% ( 1) 00:17:32.972 7.964 - 8.012: 98.7703% ( 2) 00:17:32.972 8.059 - 8.107: 98.7776% ( 1) 00:17:32.972 8.107 - 8.154: 98.7923% ( 2) 00:17:32.972 8.154 - 8.201: 98.7997% ( 1) 00:17:32.972 8.201 - 8.249: 98.8071% ( 1) 00:17:32.972 8.249 - 8.296: 98.8144% ( 1) 00:17:32.972 8.439 - 8.486: 98.8292% ( 2) 
00:17:32.972 8.533 - 8.581: 98.8365% ( 1) 00:17:32.972 8.628 - 8.676: 98.8513% ( 2) 00:17:32.972 8.676 - 8.723: 98.8586% ( 1) 00:17:32.972 8.960 - 9.007: 98.8660% ( 1) 00:17:32.972 9.055 - 9.102: 98.8733% ( 1) 00:17:32.972 9.292 - 9.339: 98.8954% ( 3) 00:17:32.972 9.481 - 9.529: 98.9028% ( 1) 00:17:32.972 9.576 - 9.624: 98.9102% ( 1) 00:17:32.972 9.671 - 9.719: 98.9175% ( 1) 00:17:32.972 9.719 - 9.766: 98.9249% ( 1) 00:17:32.972 9.956 - 10.003: 98.9396% ( 2) 00:17:32.972 10.098 - 10.145: 98.9543% ( 2) 00:17:32.972 10.335 - 10.382: 98.9617% ( 1) 00:17:32.972 10.382 - 10.430: 98.9764% ( 2) 00:17:32.972 10.951 - 10.999: 98.9838% ( 1) 00:17:32.972 11.188 - 11.236: 98.9912% ( 1) 00:17:32.972 11.662 - 11.710: 98.9985% ( 1) 00:17:32.972 11.852 - 11.899: 99.0059% ( 1) 00:17:32.972 11.899 - 11.947: 99.0133% ( 1) 00:17:32.972 12.705 - 12.800: 99.0206% ( 1) 00:17:32.972 12.895 - 12.990: 99.0280% ( 1) 00:17:32.972 12.990 - 13.084: 99.0353% ( 1) 00:17:32.972 13.084 - 13.179: 99.0427% ( 1) 00:17:32.972 13.369 - 13.464: 99.0574% ( 2) 00:17:32.972 13.559 - 13.653: 99.0869% ( 4) 00:17:32.972 14.033 - 14.127: 99.1016% ( 2) 00:17:32.972 14.317 - 14.412: 99.1163% ( 2) 00:17:32.972 14.507 - 14.601: 99.1237% ( 1) 00:17:32.972 14.886 - 14.981: 99.1311% ( 1) 00:17:32.972 14.981 - 15.076: 99.1384% ( 1) 00:17:32.972 15.360 - 15.455: 99.1458% ( 1) 00:17:32.972 17.256 - 17.351: 99.1605% ( 2) 00:17:32.972 17.351 - 17.446: 99.2047% ( 6) 00:17:32.972 17.446 - 17.541: 99.2268% ( 3) 00:17:32.972 17.541 - 17.636: 99.2415% ( 2) 00:17:32.972 17.636 - 17.730: 99.2784% ( 5) 00:17:32.972 17.730 - 17.825: 99.3299% ( 7) 00:17:32.972 17.825 - 17.920: 99.3446% ( 2) 00:17:32.972 17.920 - 18.015: 99.3962% ( 7) 00:17:32.972 18.015 - 18.110: 99.4256% ( 4) 00:17:32.972 18.110 - 18.204: 99.4919% ( 9) 00:17:32.972 18.204 - 18.299: 99.5729% ( 11) 00:17:32.972 18.299 - 18.394: 99.6024% ( 4) 00:17:32.972 18.394 - 18.489: 99.6318% ( 4) 00:17:32.972 18.489 - 18.584: 99.6760% ( 6) 00:17:32.972 18.584 - 18.679: 99.7202% 
( 6) 00:17:32.972 18.679 - 18.773: 99.7570% ( 5) 00:17:32.972 18.773 - 18.868: 99.7791% ( 3) 00:17:32.972 18.868 - 18.963: 99.7938% ( 2) 00:17:32.972 19.058 - 19.153: 99.8159% ( 3) 00:17:32.972 19.153 - 19.247: 99.8454% ( 4) 00:17:32.972 19.247 - 19.342: 99.8527% ( 1) 00:17:32.972 19.342 - 19.437: 99.8675% ( 2) 00:17:32.972 19.437 - 19.532: 99.8748% ( 1) 00:17:32.972 19.721 - 19.816: 99.8822% ( 1) 00:17:32.972 21.713 - 21.807: 99.8895% ( 1) 00:17:32.972 22.756 - 22.850: 99.8969% ( 1) 00:17:32.972 23.040 - 23.135: 99.9043% ( 1) 00:17:32.972 24.462 - 24.652: 99.9116% ( 1) 00:17:32.972 28.634 - 28.824: 99.9190% ( 1) 00:17:32.972 3980.705 - 4004.978: 99.9779% ( 8) 00:17:32.972 4004.978 - 4029.250: 100.0000% ( 3) 00:17:32.972 00:17:32.972 Complete histogram 00:17:32.972 ================== 00:17:32.972 Range in us Cumulative Count 00:17:32.972 2.086 - 2.098: 0.0368% ( 5) 00:17:32.972 2.098 - 2.110: 17.0619% ( 2312) 00:17:32.972 2.110 - 2.121: 42.3270% ( 3431) 00:17:32.972 2.121 - 2.133: 43.4978% ( 159) 00:17:32.972 2.133 - 2.145: 52.4006% ( 1209) 00:17:32.972 2.145 - 2.157: 59.2047% ( 924) 00:17:32.972 2.157 - 2.169: 60.9352% ( 235) 00:17:32.972 2.169 - 2.181: 71.3623% ( 1416) 00:17:32.972 2.181 - 2.193: 78.1296% ( 919) 00:17:32.973 2.193 - 2.204: 79.0133% ( 120) 00:17:32.973 2.204 - 2.216: 83.9028% ( 664) 00:17:32.973 2.216 - 2.228: 86.8336% ( 398) 00:17:32.973 2.228 - 2.240: 87.5331% ( 95) 00:17:32.973 2.240 - 2.252: 89.6686% ( 290) 00:17:32.973 2.252 - 2.264: 92.5920% ( 397) 00:17:32.973 2.264 - 2.276: 93.3063% ( 97) 00:17:32.973 2.276 - 2.287: 94.1605% ( 116) 00:17:32.973 2.287 - 2.299: 94.8675% ( 96) 00:17:32.973 2.299 - 2.311: 94.9485% ( 11) 00:17:32.973 2.311 - 2.323: 95.2209% ( 37) 00:17:32.973 2.323 - 2.335: 95.7216% ( 68) 00:17:32.973 2.335 - 2.347: 95.9647% ( 33) 00:17:32.973 2.347 - 2.359: 96.0604% ( 13) 00:17:32.973 2.359 - 2.370: 96.0972% ( 5) 00:17:32.973 2.370 - 2.382: 96.1414% ( 6) 00:17:32.973 2.382 - 2.394: 96.1856% ( 6) 00:17:32.973 2.394 - 2.406: 
96.2371% ( 7) 00:17:32.973 2.406 - 2.418: 96.3255% ( 12) 00:17:32.973 2.418 - 2.430: 96.3697% ( 6) 00:17:32.973 2.430 - 2.441: 96.4801% ( 15) 00:17:32.973 2.441 - 2.453: 96.5611% ( 11) 00:17:32.973 2.453 - 2.465: 96.6642% ( 14) 00:17:32.973 2.465 - 2.477: 96.8041% ( 19) 00:17:32.973 2.477 - 2.489: 96.9514% ( 20) 00:17:32.973 2.489 - 2.501: 97.0692% ( 16) 00:17:32.973 2.501 - 2.513: 97.2091% ( 19) 00:17:32.973 2.513 - 2.524: 97.4080% ( 27) 00:17:32.973 2.524 - 2.536: 97.5552% ( 20) 00:17:32.973 2.536 - 2.548: 97.7541% ( 27) 00:17:32.973 2.548 - 2.560: 97.8940% ( 19) 00:17:32.973 2.560 - 2.572: 98.0781% ( 25) 00:17:32.973 2.572 - 2.584: 98.1664% ( 12) 00:17:32.973 2.584 - 2.596: 98.2474% ( 11) 00:17:32.973 2.596 - 2.607: 98.3505% ( 14) 00:17:32.973 2.607 - 2.619: 98.4021% ( 7) 00:17:32.973 2.619 - 2.631: 98.4315% ( 4) 00:17:32.973 2.631 - 2.643: 98.4683% ( 5) 00:17:32.973 2.643 - 2.655: 98.4757% ( 1) 00:17:32.973 2.655 - 2.667: 98.4904% ( 2) 00:17:32.973 2.667 - 2.679: 98.4978% ( 1) 00:17:32.973 2.773 - 2.785: 98.5052% ( 1) 00:17:32.973 2.809 - 2.821: 98.5199% ( 2) 00:17:32.973 2.856 - 2.868: 98.5272% ( 1) 00:17:32.973 3.413 - 3.437: 98.5346% ( 1) 00:17:32.973 3.437 - 3.461: 98.5420% ( 1) 00:17:32.973 3.461 - 3.484: 98.5493% ( 1) 00:17:32.973 3.484 - 3.508: 98.5567% ( 1) 00:17:32.973 3.508 - 3.532: 98.5641% ( 1) 00:17:32.973 3.532 - 3.556: 98.5714% ( 1) 00:17:32.973 3.579 - 3.603: 98.5788% ( 1) 00:17:32.973 3.603 - 3.627: 98.5935% ( 2) 00:17:32.973 3.627 - 3.650: 98.6082% ( 2) 00:17:32.973 3.650 - 3.674: 98.6303% ( 3) 00:17:32.973 3.698 - 3.721: 98.6377% ( 1) 00:17:32.973 3.721 - 3.745: 98.6451% ( 1) 00:17:32.973 3.745 - 3.769: 98.6524% ( 1) 00:17:32.973 3.769 - 3.793: 98.6598% ( 1) 00:17:32.973 3.793 - 3.816: 98.6672% ( 1) 00:17:32.973 3.840 - 3.864: 98.6745% ( 1) 00:17:32.973 3.864 - 3.887: 98.6819% ( 1) 00:17:32.973 3.887 - 3.911: 98.6892% ( 1) 00:17:32.973 3.982 - 4.006: 98.6966% ( 1) 00:17:32.973 4.077 - 4.101: 98.7040% ( 1) 00:17:32.973 4.101 - 4.124: 98.7113% 
( 1) 00:17:32.973 4.148 - 4.172: 98.7187% ( 1) 00:17:32.973 4.172 - 4.196: 98.7261% ( 1) 00:17:32.973 [2024-07-25 04:00:48.191065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:32.973 4.433 - 4.456: 98.7334% ( 1) 00:17:32.973 5.665 - 5.689: 98.7408% ( 1) 00:17:32.973 6.068 - 6.116: 98.7482% ( 1) 00:17:32.973 6.116 - 6.163: 98.7629% ( 2) 00:17:32.973 6.210 - 6.258: 98.7776% ( 2) 00:17:32.973 6.447 - 6.495: 98.7850% ( 1) 00:17:32.973 6.590 - 6.637: 98.7923% ( 1) 00:17:32.973 7.016 - 7.064: 98.7997% ( 1) 00:17:32.973 7.206 - 7.253: 98.8144% ( 2) 00:17:32.973 7.253 - 7.301: 98.8218% ( 1) 00:17:32.973 7.348 - 7.396: 98.8292% ( 1) 00:17:32.973 7.396 - 7.443: 98.8365% ( 1) 00:17:32.973 7.490 - 7.538: 98.8439% ( 1) 00:17:32.973 8.107 - 8.154: 98.8513% ( 1) 00:17:32.973 8.439 - 8.486: 98.8586% ( 1) 00:17:32.973 8.628 - 8.676: 98.8660% ( 1) 00:17:32.973 9.197 - 9.244: 98.8733% ( 1) 00:17:32.973 12.041 - 12.089: 98.8807% ( 1) 00:17:32.973 12.089 - 12.136: 98.8881% ( 1) 00:17:32.973 15.455 - 15.550: 98.8954% ( 1) 00:17:32.973 15.550 - 15.644: 98.9028% ( 1) 00:17:32.973 15.834 - 15.929: 98.9102% ( 1) 00:17:32.973 15.929 - 16.024: 98.9323% ( 3) 00:17:32.973 16.024 - 16.119: 98.9470% ( 2) 00:17:32.973 16.119 - 16.213: 98.9617% ( 2) 00:17:32.973 16.213 - 16.308: 98.9912% ( 4) 00:17:32.973 16.308 - 16.403: 99.0280% ( 5) 00:17:32.973 16.403 - 16.498: 99.0501% ( 3) 00:17:32.973 16.498 - 16.593: 99.1016% ( 7) 00:17:32.973 16.593 - 16.687: 99.1532% ( 7) 00:17:32.973 16.687 - 16.782: 99.1973% ( 6) 00:17:32.973 16.782 - 16.877: 99.2121% ( 2) 00:17:32.973 16.877 - 16.972: 99.2489% ( 5) 00:17:32.973 16.972 - 17.067: 99.2563% ( 1) 00:17:32.973 17.067 - 17.161: 99.2784% ( 3) 00:17:32.973 17.161 - 17.256: 99.2931% ( 2) 00:17:32.973 17.351 - 17.446: 99.3078% ( 2) 00:17:32.973 17.636 - 17.730: 99.3152% ( 1) 00:17:32.973 17.730 - 17.825: 99.3225% ( 1) 00:17:32.973 17.825 - 17.920: 99.3299% ( 1) 00:17:32.973 17.920 - 18.015: 99.3373%
( 1) 00:17:32.973 18.015 - 18.110: 99.3446% ( 1) 00:17:32.973 18.584 - 18.679: 99.3520% ( 1) 00:17:32.973 22.945 - 23.040: 99.3594% ( 1) 00:17:32.973 26.738 - 26.927: 99.3667% ( 1) 00:17:32.973 30.341 - 30.530: 99.3741% ( 1) 00:17:32.973 35.650 - 35.840: 99.3814% ( 1) 00:17:32.973 3203.982 - 3228.255: 99.3888% ( 1) 00:17:32.973 3980.705 - 4004.978: 99.7275% ( 46) 00:17:32.973 4004.978 - 4029.250: 100.0000% ( 37) 00:17:32.973 00:17:32.973 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:32.973 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:32.973 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:32.973 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:32.973 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:33.231 [ 00:17:33.231 { 00:17:33.231 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:33.231 "subtype": "Discovery", 00:17:33.231 "listen_addresses": [], 00:17:33.231 "allow_any_host": true, 00:17:33.231 "hosts": [] 00:17:33.231 }, 00:17:33.231 { 00:17:33.231 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:33.231 "subtype": "NVMe", 00:17:33.231 "listen_addresses": [ 00:17:33.231 { 00:17:33.231 "trtype": "VFIOUSER", 00:17:33.231 "adrfam": "IPv4", 00:17:33.231 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:33.231 "trsvcid": "0" 00:17:33.231 } 00:17:33.231 ], 00:17:33.231 "allow_any_host": true, 00:17:33.231 "hosts": [], 00:17:33.231 "serial_number": "SPDK1", 00:17:33.231 "model_number": "SPDK bdev Controller", 00:17:33.231 "max_namespaces": 32, 00:17:33.231 
"min_cntlid": 1, 00:17:33.231 "max_cntlid": 65519, 00:17:33.231 "namespaces": [ 00:17:33.231 { 00:17:33.231 "nsid": 1, 00:17:33.231 "bdev_name": "Malloc1", 00:17:33.231 "name": "Malloc1", 00:17:33.231 "nguid": "8C91EA21333B4132881D368D9A09D1E9", 00:17:33.231 "uuid": "8c91ea21-333b-4132-881d-368d9a09d1e9" 00:17:33.231 }, 00:17:33.231 { 00:17:33.231 "nsid": 2, 00:17:33.231 "bdev_name": "Malloc3", 00:17:33.231 "name": "Malloc3", 00:17:33.231 "nguid": "525DD13833CF480EB3587995E64B18C2", 00:17:33.231 "uuid": "525dd138-33cf-480e-b358-7995e64b18c2" 00:17:33.231 } 00:17:33.231 ] 00:17:33.231 }, 00:17:33.231 { 00:17:33.231 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:33.231 "subtype": "NVMe", 00:17:33.231 "listen_addresses": [ 00:17:33.231 { 00:17:33.231 "trtype": "VFIOUSER", 00:17:33.231 "adrfam": "IPv4", 00:17:33.231 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:33.231 "trsvcid": "0" 00:17:33.231 } 00:17:33.231 ], 00:17:33.231 "allow_any_host": true, 00:17:33.231 "hosts": [], 00:17:33.231 "serial_number": "SPDK2", 00:17:33.231 "model_number": "SPDK bdev Controller", 00:17:33.231 "max_namespaces": 32, 00:17:33.231 "min_cntlid": 1, 00:17:33.231 "max_cntlid": 65519, 00:17:33.231 "namespaces": [ 00:17:33.231 { 00:17:33.231 "nsid": 1, 00:17:33.231 "bdev_name": "Malloc2", 00:17:33.231 "name": "Malloc2", 00:17:33.231 "nguid": "FB4EB22E62C74A91859F60885CC7D5A5", 00:17:33.231 "uuid": "fb4eb22e-62c7-4a91-859f-60885cc7d5a5" 00:17:33.231 } 00:17:33.231 ] 00:17:33.231 } 00:17:33.231 ] 00:17:33.231 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:33.231 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=827439 00:17:33.231 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 
subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:33.231 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:33.231 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:33.231 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:33.231 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:33.231 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:33.231 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:33.231 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:33.231 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.489 [2024-07-25 04:00:48.635483] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:33.489 Malloc4 00:17:33.489 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:33.746 [2024-07-25 04:00:48.998078] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:33.746 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:33.746 Asynchronous Event Request test 00:17:33.746 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:33.746 Attached to /var/run/vfio-user/domain/vfio-user2/2 
00:17:33.746 Registering asynchronous event callbacks... 00:17:33.746 Starting namespace attribute notice tests for all controllers... 00:17:33.746 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:33.746 aer_cb - Changed Namespace 00:17:33.746 Cleaning up... 00:17:34.002 [ 00:17:34.002 { 00:17:34.002 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:34.002 "subtype": "Discovery", 00:17:34.002 "listen_addresses": [], 00:17:34.002 "allow_any_host": true, 00:17:34.002 "hosts": [] 00:17:34.002 }, 00:17:34.002 { 00:17:34.002 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:34.002 "subtype": "NVMe", 00:17:34.002 "listen_addresses": [ 00:17:34.002 { 00:17:34.002 "trtype": "VFIOUSER", 00:17:34.002 "adrfam": "IPv4", 00:17:34.002 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:34.002 "trsvcid": "0" 00:17:34.002 } 00:17:34.002 ], 00:17:34.002 "allow_any_host": true, 00:17:34.002 "hosts": [], 00:17:34.002 "serial_number": "SPDK1", 00:17:34.002 "model_number": "SPDK bdev Controller", 00:17:34.002 "max_namespaces": 32, 00:17:34.002 "min_cntlid": 1, 00:17:34.002 "max_cntlid": 65519, 00:17:34.002 "namespaces": [ 00:17:34.002 { 00:17:34.002 "nsid": 1, 00:17:34.002 "bdev_name": "Malloc1", 00:17:34.002 "name": "Malloc1", 00:17:34.002 "nguid": "8C91EA21333B4132881D368D9A09D1E9", 00:17:34.002 "uuid": "8c91ea21-333b-4132-881d-368d9a09d1e9" 00:17:34.002 }, 00:17:34.002 { 00:17:34.002 "nsid": 2, 00:17:34.002 "bdev_name": "Malloc3", 00:17:34.002 "name": "Malloc3", 00:17:34.002 "nguid": "525DD13833CF480EB3587995E64B18C2", 00:17:34.002 "uuid": "525dd138-33cf-480e-b358-7995e64b18c2" 00:17:34.002 } 00:17:34.002 ] 00:17:34.002 }, 00:17:34.002 { 00:17:34.002 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:34.002 "subtype": "NVMe", 00:17:34.002 "listen_addresses": [ 00:17:34.002 { 00:17:34.002 "trtype": "VFIOUSER", 00:17:34.002 "adrfam": "IPv4", 00:17:34.002 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:34.002 "trsvcid": 
"0" 00:17:34.002 } 00:17:34.002 ], 00:17:34.002 "allow_any_host": true, 00:17:34.002 "hosts": [], 00:17:34.002 "serial_number": "SPDK2", 00:17:34.002 "model_number": "SPDK bdev Controller", 00:17:34.002 "max_namespaces": 32, 00:17:34.002 "min_cntlid": 1, 00:17:34.002 "max_cntlid": 65519, 00:17:34.002 "namespaces": [ 00:17:34.002 { 00:17:34.002 "nsid": 1, 00:17:34.002 "bdev_name": "Malloc2", 00:17:34.002 "name": "Malloc2", 00:17:34.002 "nguid": "FB4EB22E62C74A91859F60885CC7D5A5", 00:17:34.002 "uuid": "fb4eb22e-62c7-4a91-859f-60885cc7d5a5" 00:17:34.002 }, 00:17:34.002 { 00:17:34.002 "nsid": 2, 00:17:34.002 "bdev_name": "Malloc4", 00:17:34.002 "name": "Malloc4", 00:17:34.002 "nguid": "CF4EDADEC80C4951B62AA24C0B4C5460", 00:17:34.002 "uuid": "cf4edade-c80c-4951-b62a-a24c0b4c5460" 00:17:34.002 } 00:17:34.002 ] 00:17:34.002 } 00:17:34.002 ] 00:17:34.003 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 827439 00:17:34.003 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:34.003 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 821431 00:17:34.003 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 821431 ']' 00:17:34.003 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 821431 00:17:34.003 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:34.003 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.003 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 821431 00:17:34.003 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:34.003 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:34.003 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 821431' 00:17:34.003 killing process with pid 821431 00:17:34.003 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 821431 00:17:34.003 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 821431 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=827581 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 827581' 00:17:34.566 Process pid: 827581 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 827581 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@831 -- # '[' -z 827581 ']' 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:34.566 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:34.566 [2024-07-25 04:00:49.662037] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:34.566 [2024-07-25 04:00:49.663084] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:17:34.566 [2024-07-25 04:00:49.663143] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.566 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.566 [2024-07-25 04:00:49.696350] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:34.566 [2024-07-25 04:00:49.725651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.566 [2024-07-25 04:00:49.821126] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.566 [2024-07-25 04:00:49.821175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:34.566 [2024-07-25 04:00:49.821204] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.566 [2024-07-25 04:00:49.821216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.566 [2024-07-25 04:00:49.821236] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.566 [2024-07-25 04:00:49.821360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.566 [2024-07-25 04:00:49.821433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.566 [2024-07-25 04:00:49.821485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.566 [2024-07-25 04:00:49.821482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.823 [2024-07-25 04:00:49.911884] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:34.823 [2024-07-25 04:00:49.912090] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:34.823 [2024-07-25 04:00:49.912364] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:17:34.823 [2024-07-25 04:00:49.912901] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:34.823 [2024-07-25 04:00:49.913143] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:17:34.823 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:34.823 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:34.823 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:35.753 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:36.010 04:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:36.010 04:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:36.010 04:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:36.010 04:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:36.010 04:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:36.268 Malloc1 00:17:36.268 04:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:36.526 04:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:36.783 04:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:17:37.040 04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:37.040 04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:37.040 04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:37.297 Malloc2 00:17:37.297 04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:37.553 04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:37.810 04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:38.067 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:38.067 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 827581 00:17:38.067 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 827581 ']' 00:17:38.067 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 827581 00:17:38.067 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:38.067 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:38.067 04:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 827581 00:17:38.067 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:38.067 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:38.067 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 827581' 00:17:38.067 killing process with pid 827581 00:17:38.067 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 827581 00:17:38.067 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 827581 00:17:38.324 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:38.324 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:38.324 00:17:38.324 real 0m52.297s 00:17:38.324 user 3m26.470s 00:17:38.324 sys 0m4.360s 00:17:38.324 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:38.324 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:38.324 ************************************ 00:17:38.324 END TEST nvmf_vfio_user 00:17:38.324 ************************************ 00:17:38.324 04:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:38.324 04:00:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:38.324 04:00:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:38.324 04:00:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.582 ************************************ 00:17:38.582 START TEST nvmf_vfio_user_nvme_compliance 00:17:38.582 ************************************ 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:38.582 * Looking for test storage... 00:17:38.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.582 04:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:38.582 04:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=828176 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 828176' 00:17:38.582 Process pid: 828176 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 828176 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 828176 ']' 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.582 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:38.583 04:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.583 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:38.583 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:38.583 [2024-07-25 04:00:53.753414] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:17:38.583 [2024-07-25 04:00:53.753491] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.583 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.583 [2024-07-25 04:00:53.784316] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:38.583 [2024-07-25 04:00:53.811188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:38.841 [2024-07-25 04:00:53.897769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.841 [2024-07-25 04:00:53.897816] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.841 [2024-07-25 04:00:53.897844] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.841 [2024-07-25 04:00:53.897855] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.841 [2024-07-25 04:00:53.897866] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:38.841 [2024-07-25 04:00:53.897947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.841 [2024-07-25 04:00:53.898013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.841 [2024-07-25 04:00:53.898015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.841 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.841 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:17:38.841 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.773 04:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:39.773 malloc0 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:39.773 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.031 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:40.031 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.031 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:40.031 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.031 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:40.031 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:40.031 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:40.031 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.031 00:17:40.031 00:17:40.031 CUnit - A unit testing framework for C - Version 2.1-3 00:17:40.031 http://cunit.sourceforge.net/ 00:17:40.031 00:17:40.031 00:17:40.031 Suite: nvme_compliance 00:17:40.031 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 04:00:55.248790] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:40.031 [2024-07-25 04:00:55.250205] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:40.031 [2024-07-25 04:00:55.250251] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:40.031 [2024-07-25 04:00:55.250266] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:40.031 [2024-07-25 04:00:55.251811] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:40.031 passed 00:17:40.289 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 04:00:55.337413] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:40.289 [2024-07-25 04:00:55.340435] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:40.289 passed 00:17:40.289 Test: admin_identify_ns ...[2024-07-25 04:00:55.426886] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:40.289 [2024-07-25 04:00:55.487277] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:40.289 [2024-07-25 04:00:55.495261] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:40.289 [2024-07-25 
04:00:55.516373] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:40.289 passed 00:17:40.546 Test: admin_get_features_mandatory_features ...[2024-07-25 04:00:55.600170] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:40.546 [2024-07-25 04:00:55.603188] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:40.546 passed 00:17:40.546 Test: admin_get_features_optional_features ...[2024-07-25 04:00:55.687778] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:40.546 [2024-07-25 04:00:55.690802] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:40.546 passed 00:17:40.546 Test: admin_set_features_number_of_queues ...[2024-07-25 04:00:55.773818] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:40.804 [2024-07-25 04:00:55.878344] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:40.804 passed 00:17:40.804 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 04:00:55.961914] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:40.804 [2024-07-25 04:00:55.964934] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:40.804 passed 00:17:40.804 Test: admin_get_log_page_with_lpo ...[2024-07-25 04:00:56.044770] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.061 [2024-07-25 04:00:56.116260] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:41.061 [2024-07-25 04:00:56.129344] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.061 passed 00:17:41.061 Test: fabric_property_get ...[2024-07-25 04:00:56.208921] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.061 [2024-07-25 04:00:56.210182] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:41.061 [2024-07-25 04:00:56.211941] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.061 passed 00:17:41.061 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 04:00:56.298550] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.061 [2024-07-25 04:00:56.299854] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:41.061 [2024-07-25 04:00:56.301566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.061 passed 00:17:41.318 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 04:00:56.383177] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.318 [2024-07-25 04:00:56.468257] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:41.319 [2024-07-25 04:00:56.484272] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:41.319 [2024-07-25 04:00:56.489344] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.319 passed 00:17:41.319 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 04:00:56.574028] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.319 [2024-07-25 04:00:56.575356] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:41.319 [2024-07-25 04:00:56.577053] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.319 passed 00:17:41.576 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 04:00:56.660262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.576 [2024-07-25 04:00:56.737256] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:17:41.576 [2024-07-25 04:00:56.761264] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:41.576 [2024-07-25 04:00:56.766376] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.576 passed 00:17:41.576 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 04:00:56.846979] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.576 [2024-07-25 04:00:56.848329] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:41.576 [2024-07-25 04:00:56.848385] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:41.576 [2024-07-25 04:00:56.852010] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.833 passed 00:17:41.833 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 04:00:56.935812] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.833 [2024-07-25 04:00:57.027256] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:41.833 [2024-07-25 04:00:57.035265] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:41.833 [2024-07-25 04:00:57.043264] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:41.833 [2024-07-25 04:00:57.051267] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:41.834 [2024-07-25 04:00:57.080354] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.834 passed 00:17:42.091 Test: admin_create_io_sq_verify_pc ...[2024-07-25 04:00:57.163827] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:42.091 [2024-07-25 04:00:57.179263] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:42.091 
[2024-07-25 04:00:57.197122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:42.091 passed 00:17:42.091 Test: admin_create_io_qp_max_qps ...[2024-07-25 04:00:57.281760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:43.461 [2024-07-25 04:00:58.388273] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:43.719 [2024-07-25 04:00:58.784576] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:43.719 passed 00:17:43.719 Test: admin_create_io_sq_shared_cq ...[2024-07-25 04:00:58.866824] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:43.719 [2024-07-25 04:00:58.998255] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:43.976 [2024-07-25 04:00:59.035345] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:43.976 passed 00:17:43.976 00:17:43.976 Run Summary: Type Total Ran Passed Failed Inactive 00:17:43.977 suites 1 1 n/a 0 0 00:17:43.977 tests 18 18 18 0 0 00:17:43.977 asserts 360 360 360 0 n/a 00:17:43.977 00:17:43.977 Elapsed time = 1.572 seconds 00:17:43.977 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 828176 00:17:43.977 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 828176 ']' 00:17:43.977 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 828176 00:17:43.977 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:17:43.977 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.977 04:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 828176 00:17:43.977 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.977 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.977 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 828176' 00:17:43.977 killing process with pid 828176 00:17:43.977 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 828176 00:17:43.977 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 828176 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:44.235 00:17:44.235 real 0m5.715s 00:17:44.235 user 0m16.092s 00:17:44.235 sys 0m0.569s 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:44.235 ************************************ 00:17:44.235 END TEST nvmf_vfio_user_nvme_compliance 00:17:44.235 ************************************ 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:44.235 
04:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:44.235 ************************************ 00:17:44.235 START TEST nvmf_vfio_user_fuzz 00:17:44.235 ************************************ 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:44.235 * Looking for test storage... 00:17:44.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.235 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- 
# NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.236 04:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:44.236 04:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=828898 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 828898' 00:17:44.236 Process pid: 828898 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 828898 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 828898 ']' 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.236 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:44.494 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:44.494 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:17:44.494 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:45.865 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:45.865 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.865 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:45.865 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.865 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:45.865 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:45.865 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.865 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:45.865 malloc0 00:17:45.865 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.865 04:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:45.865 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.865 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:45.865 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.865 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:45.866 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.866 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:45.866 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.866 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:45.866 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.866 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:45.866 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.866 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:45.866 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 
'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:17.952 Fuzzing completed. Shutting down the fuzz application 00:18:17.952 00:18:17.952 Dumping successful admin opcodes: 00:18:17.952 8, 9, 10, 24, 00:18:17.952 Dumping successful io opcodes: 00:18:17.952 0, 00:18:17.952 NS: 0x200003a1ef00 I/O qp, Total commands completed: 555019, total successful commands: 2134, random_seed: 140207872 00:18:17.952 NS: 0x200003a1ef00 admin qp, Total commands completed: 134016, total successful commands: 1085, random_seed: 2016560512 00:18:17.952 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:17.952 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.952 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:17.952 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.952 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 828898 00:18:17.952 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 828898 ']' 00:18:17.952 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 828898 00:18:17.952 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:18:17.952 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:17.953 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 828898 00:18:17.953 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:17.953 04:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:17.953 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 828898' 00:18:17.953 killing process with pid 828898 00:18:17.953 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 828898 00:18:17.953 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 828898 00:18:17.953 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:17.953 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:17.953 00:18:17.953 real 0m32.508s 00:18:17.953 user 0m32.182s 00:18:17.953 sys 0m28.118s 00:18:17.953 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:17.953 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:17.953 ************************************ 00:18:17.953 END TEST nvmf_vfio_user_fuzz 00:18:17.953 ************************************ 00:18:17.953 04:01:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:17.953 04:01:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:17.953 04:01:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:17.953 04:01:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:17.953 
************************************ 00:18:17.953 START TEST nvmf_auth_target 00:18:17.953 ************************************ 00:18:17.953 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:17.953 * Looking for test storage... 00:18:17.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.953 04:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- 
# subnqn=nqn.2024-03.io.spdk:cnode0 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:17.953 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.887 04:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:18.887 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:18.888 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:18.888 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:18.888 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:18.888 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:18.888 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:18.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:18.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:18:18.888 00:18:18.888 --- 10.0.0.2 ping statistics --- 00:18:18.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.888 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:18.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:18.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:18:18.888 00:18:18.888 --- 10.0.0.1 ping statistics --- 00:18:18.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.888 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=834329 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 834329 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 834329 ']' 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
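The `nvmf_tcp_init` trace above (common.sh@229–268) builds the two-interface test topology: it flushes both NICs, moves the target-side device into a fresh network namespace, assigns 10.0.0.1/24 (initiator) and 10.0.0.2/24 (target), opens TCP port 4420, and verifies reachability with a ping in each direction. A standalone sketch of that sequence, lifted from the commands in the log (requires root; the `cvl_0_0`/`cvl_0_1` names and addresses come straight from this run):

```shell
# Recreate the netns-based NVMe/TCP topology from the nvmf_tcp_init trace.
setup_nvmf_tcp_netns() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"    # target side lives in the namespace

    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Admit NVMe/TCP traffic (port 4420) arriving on the initiator interface.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

    # Sanity check both directions, exactly as the harness does.
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

With this in place, `nvmf_tgt` is launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`), which is why the trace prefixes the app command with `NVMF_TARGET_NS_CMD`.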
00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.888 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=834355 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@726 -- # digest=null 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=18aea97123ba90f9cca7c8dc01c179785d9a84fa997fb079 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.bDP 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 18aea97123ba90f9cca7c8dc01c179785d9a84fa997fb079 0 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 18aea97123ba90f9cca7c8dc01c179785d9a84fa997fb079 0 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=18aea97123ba90f9cca7c8dc01c179785d9a84fa997fb079 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:19.147 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.bDP 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.bDP 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.bDP 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=78a2d81b239e5480ea5de9285f6c335480b3504c2b21caae91a7e64d533085f5 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dkJ 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 78a2d81b239e5480ea5de9285f6c335480b3504c2b21caae91a7e64d533085f5 3 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 78a2d81b239e5480ea5de9285f6c335480b3504c2b21caae91a7e64d533085f5 3 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=78a2d81b239e5480ea5de9285f6c335480b3504c2b21caae91a7e64d533085f5 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:18:19.406 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dkJ 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dkJ 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.dkJ 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7e992f68e5ba5c091716389a01891d9c 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Enj 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7e992f68e5ba5c091716389a01891d9c 1 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
7e992f68e5ba5c091716389a01891d9c 1 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7e992f68e5ba5c091716389a01891d9c 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Enj 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Enj 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Enj 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b86598db4e2b8ef9b2e5d4b80d275fdf97d737f822fbad1f 00:18:19.407 04:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JlB 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b86598db4e2b8ef9b2e5d4b80d275fdf97d737f822fbad1f 2 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b86598db4e2b8ef9b2e5d4b80d275fdf97d737f822fbad1f 2 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b86598db4e2b8ef9b2e5d4b80d275fdf97d737f822fbad1f 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JlB 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JlB 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.JlB 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fd16ee8e6a38b6dd28373d5c9b4d6686e44356851dd2f4e5 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4RY 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fd16ee8e6a38b6dd28373d5c9b4d6686e44356851dd2f4e5 2 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fd16ee8e6a38b6dd28373d5c9b4d6686e44356851dd2f4e5 2 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fd16ee8e6a38b6dd28373d5c9b4d6686e44356851dd2f4e5 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4RY 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4RY 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.4RY 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=77003f11ed11b314c8f8a29ee9adcc84 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UaH 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 77003f11ed11b314c8f8a29ee9adcc84 1 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 77003f11ed11b314c8f8a29ee9adcc84 1 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=77003f11ed11b314c8f8a29ee9adcc84 00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:18:19.407 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UaH 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UaH 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.UaH 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=06eb3f8505e4268c4511d2836d6c895bec58d28927e4ba186f748a612f3d3d45 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Vfd 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 06eb3f8505e4268c4511d2836d6c895bec58d28927e4ba186f748a612f3d3d45 3 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 06eb3f8505e4268c4511d2836d6c895bec58d28927e4ba186f748a612f3d3d45 3 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=06eb3f8505e4268c4511d2836d6c895bec58d28927e4ba186f748a612f3d3d45 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Vfd 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Vfd 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Vfd 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 834329 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 834329 ']' 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
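The repeated `gen_dhchap_key`/`format_key` traces above follow one pattern: read `len/2` random bytes as hex via `xxd`, then have an embedded `python -` step wrap them in the NVMe DH-HMAC-CHAP secret representation. A minimal standalone re-creation is below; the payload layout (`DHHC-1:<digest>:<base64(key || crc32-le)>:`) matches the spec's secret format, but treat the exact framing as an assumption rather than a verbatim copy of the harness's python:

```shell
# Sketch of gen_dhchap_key: digest id (0=null, 1=sha256, 2=sha384, 3=sha512)
# and key length in hex characters, as used in the trace (e.g. null/48, sha512/64).
gen_dhchap_key_sketch() {
    local digest=$1 len=$2 key
    # len hex chars == len/2 random bytes, same as the xtrace's xxd call.
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib

key = bytes.fromhex(sys.argv[1])
# Assumption: payload is key bytes followed by a little-endian CRC-32 of the key.
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()),
      end="")
EOF
}
```

The harness then writes each secret to a `mktemp -t spdk.key-<digest>.XXX` file and `chmod 0600`s it, which is what the `/tmp/spdk.key-*` paths in the trace are.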
00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.666 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.924 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.924 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:19.924 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 834355 /var/tmp/host.sock 00:18:19.924 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 834355 ']' 00:18:19.924 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:19.924 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.924 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:19.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
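The `keyring_file_add_key` calls that follow register every generated secret file twice: once against the target's default RPC socket (`/var/tmp/spdk.sock`) and once, via the `hostrpc` wrapper, against the host app on `/var/tmp/host.sock`, using names `key0..key3` plus `ckey0..ckey2` for the controller-side secrets. A hedged sketch of that loop (the `rpc.py` path and `KEY_FILES`/`CKEY_FILES` arrays are assumptions for illustration; the RPC name itself appears in the trace):

```shell
# Register DHCHAP secret files with both the target and host RPC servers.
register_dhchap_keys() {
    local rpc=${SPDK_ROOT:-.}/scripts/rpc.py i
    for i in "${!KEY_FILES[@]}"; do
        "$rpc" keyring_file_add_key "key$i" "${KEY_FILES[i]}"
        "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${KEY_FILES[i]}"
        # Controller keys are optional; mirror the [[ -n $ckey ]] guard in auth.sh.
        if [[ -n ${CKEY_FILES[i]:-} ]]; then
            "$rpc" keyring_file_add_key "ckey$i" "${CKEY_FILES[i]}"
            "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${CKEY_FILES[i]}"
        fi
    done
}
```

This mirrors auth.sh's `for i in "${!keys[@]}"` loop: `rpc_cmd` talks to the target, `hostrpc` passes `-s /var/tmp/host.sock` to reach the initiator-side `spdk_tgt`.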
00:18:19.924 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.924 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.183 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.183 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:20.183 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:20.183 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.183 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.183 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.183 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:20.183 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bDP 00:18:20.183 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.183 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.183 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.183 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.bDP 00:18:20.183 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.bDP 00:18:20.441 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.dkJ ]] 00:18:20.441 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dkJ 00:18:20.441 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.441 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.441 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.441 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dkJ 00:18:20.441 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dkJ 00:18:20.699 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:20.699 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Enj 00:18:20.699 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.699 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.699 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.699 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Enj 00:18:20.699 04:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Enj 00:18:20.956 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.JlB ]] 00:18:20.957 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JlB 00:18:20.957 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.957 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.957 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.957 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JlB 00:18:20.957 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JlB 00:18:21.214 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:21.214 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4RY 00:18:21.214 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.214 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.214 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.214 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.4RY 00:18:21.214 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.4RY 00:18:21.472 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.UaH ]] 00:18:21.472 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UaH 00:18:21.472 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.472 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.472 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.472 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UaH 00:18:21.472 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UaH 00:18:21.729 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:21.729 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Vfd 00:18:21.729 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.729 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.729 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.729 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Vfd 00:18:21.729 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Vfd 00:18:21.986 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:18:21.986 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:21.986 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.986 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.986 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:21.987 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:22.244 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:22.244 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.244 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:22.244 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:22.244 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:22.244 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.244 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.244 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.244 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
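The xtrace output above repeats one cycle per key (key0..key3): register the key file on both the target and host RPC sockets, restrict the host's DH-CHAP digests/dhgroups, allow the host NQN on the subsystem with that key, attach a controller over TCP, check the qpair's auth state, then detach and remove the host. A minimal dry-run sketch of that loop follows; the `rpc.py` path, key-file paths, and the `run` wrapper are illustrative assumptions, and the commands are echoed rather than executed:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-key DH-CHAP cycle exercised in the log above.
# RPC_PY, HOST_SOCK, and the /tmp key paths are illustrative assumptions.
RPC_PY="scripts/rpc.py"
HOST_SOCK="/var/tmp/host.sock"
SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"

run() { echo "$@"; }   # print each RPC; swap body for "$@" to actually execute

for keyid in 0 1 2 3; do
    # register the key under both the target app and the host app
    run "$RPC_PY" keyring_file_add_key "key$keyid" "/tmp/spdk.key.$keyid"
    run "$RPC_PY" -s "$HOST_SOCK" keyring_file_add_key "key$keyid" "/tmp/spdk.key.$keyid"
    # restrict host-side auth to the digest/dhgroup combination under test
    run "$RPC_PY" -s "$HOST_SOCK" bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null
    # allow the host on the subsystem with this key, then connect over TCP
    run "$RPC_PY" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid"
    run "$RPC_PY" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key$keyid"
    # tear down before moving on to the next key
    run "$RPC_PY" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
    run "$RPC_PY" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
done
```

In the real run, the test additionally pairs each key with a controller key (`ckey0`..`ckey2`) via `--dhchap-ctrlr-key` for bidirectional authentication, and verifies the resulting qpair with `nvmf_subsystem_get_qpairs` plus `jq` checks on `.auth.state`, `.auth.digest`, and `.auth.dhgroup`, as visible in the JSON blocks of the log.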
00:18:22.244 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.244 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.244 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.502 00:18:22.502 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.502 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.502 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.760 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.760 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.760 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.760 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.760 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.760 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:22.760 { 00:18:22.760 "cntlid": 1, 00:18:22.760 "qid": 0, 00:18:22.760 "state": "enabled", 00:18:22.760 "thread": "nvmf_tgt_poll_group_000", 00:18:22.760 "listen_address": { 00:18:22.760 "trtype": "TCP", 00:18:22.760 "adrfam": "IPv4", 00:18:22.760 "traddr": "10.0.0.2", 00:18:22.760 "trsvcid": "4420" 00:18:22.760 }, 00:18:22.760 "peer_address": { 00:18:22.760 "trtype": "TCP", 00:18:22.760 "adrfam": "IPv4", 00:18:22.760 "traddr": "10.0.0.1", 00:18:22.760 "trsvcid": "45672" 00:18:22.760 }, 00:18:22.760 "auth": { 00:18:22.760 "state": "completed", 00:18:22.760 "digest": "sha256", 00:18:22.760 "dhgroup": "null" 00:18:22.760 } 00:18:22.760 } 00:18:22.760 ]' 00:18:22.760 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.760 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.760 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.760 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:22.760 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.760 04:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.760 04:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.760 04:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.017 04:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=: 00:18:23.950 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.950 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.950 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.950 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.208 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.208 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.208 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:24.208 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:24.466 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:24.466 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.466 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.466 04:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:24.466 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:24.466 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.466 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.466 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.466 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.466 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.466 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.466 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.724 00:18:24.724 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.724 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.724 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.982 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.982 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.982 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.982 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.982 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.982 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.982 { 00:18:24.982 "cntlid": 3, 00:18:24.982 "qid": 0, 00:18:24.982 "state": "enabled", 00:18:24.982 "thread": "nvmf_tgt_poll_group_000", 00:18:24.982 "listen_address": { 00:18:24.982 "trtype": "TCP", 00:18:24.982 "adrfam": "IPv4", 00:18:24.982 "traddr": "10.0.0.2", 00:18:24.982 "trsvcid": "4420" 00:18:24.982 }, 00:18:24.982 "peer_address": { 00:18:24.982 "trtype": "TCP", 00:18:24.982 "adrfam": "IPv4", 00:18:24.982 "traddr": "10.0.0.1", 00:18:24.982 "trsvcid": "60236" 00:18:24.982 }, 00:18:24.982 "auth": { 00:18:24.982 "state": "completed", 00:18:24.982 "digest": "sha256", 00:18:24.982 "dhgroup": "null" 00:18:24.982 } 00:18:24.982 } 00:18:24.982 ]' 00:18:24.982 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.982 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.982 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.982 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:24.982 04:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.982 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.982 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.982 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.240 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==: 00:18:26.172 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.430 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.430 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.430 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.430 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.430 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.430 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:26.430 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:26.687 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:26.687 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.687 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.687 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:26.687 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:26.687 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.687 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.687 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.687 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.687 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.687 04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.687 
04:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.943 00:18:26.943 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.943 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.943 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.200 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.200 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.200 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.200 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.200 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.200 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.200 { 00:18:27.200 "cntlid": 5, 00:18:27.200 "qid": 0, 00:18:27.200 "state": "enabled", 00:18:27.200 "thread": "nvmf_tgt_poll_group_000", 00:18:27.200 "listen_address": { 00:18:27.200 "trtype": "TCP", 00:18:27.200 "adrfam": "IPv4", 00:18:27.200 "traddr": "10.0.0.2", 00:18:27.200 "trsvcid": "4420" 00:18:27.200 }, 00:18:27.200 "peer_address": { 00:18:27.200 "trtype": "TCP", 00:18:27.200 "adrfam": "IPv4", 00:18:27.200 "traddr": 
"10.0.0.1", 00:18:27.200 "trsvcid": "60256" 00:18:27.200 }, 00:18:27.200 "auth": { 00:18:27.200 "state": "completed", 00:18:27.200 "digest": "sha256", 00:18:27.200 "dhgroup": "null" 00:18:27.200 } 00:18:27.200 } 00:18:27.200 ]' 00:18:27.200 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.200 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.200 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.200 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:27.200 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.200 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.200 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.200 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.457 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:18:28.388 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.388 04:01:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:28.388 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.388 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.388 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.388 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.388 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:28.388 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:28.644 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:28.644 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.644 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.644 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:28.644 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:28.644 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.644 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:28.644 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.644 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.644 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.644 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.644 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.208 00:18:29.208 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.208 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.208 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.208 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.208 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.208 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.208 04:01:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.208 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.208 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.208 { 00:18:29.208 "cntlid": 7, 00:18:29.208 "qid": 0, 00:18:29.208 "state": "enabled", 00:18:29.208 "thread": "nvmf_tgt_poll_group_000", 00:18:29.208 "listen_address": { 00:18:29.208 "trtype": "TCP", 00:18:29.208 "adrfam": "IPv4", 00:18:29.208 "traddr": "10.0.0.2", 00:18:29.208 "trsvcid": "4420" 00:18:29.208 }, 00:18:29.208 "peer_address": { 00:18:29.208 "trtype": "TCP", 00:18:29.208 "adrfam": "IPv4", 00:18:29.208 "traddr": "10.0.0.1", 00:18:29.208 "trsvcid": "60296" 00:18:29.208 }, 00:18:29.208 "auth": { 00:18:29.208 "state": "completed", 00:18:29.208 "digest": "sha256", 00:18:29.208 "dhgroup": "null" 00:18:29.208 } 00:18:29.208 } 00:18:29.208 ]' 00:18:29.208 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.465 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.465 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.465 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:29.465 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.465 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.465 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.465 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.722 04:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=: 00:18:30.652 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.652 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.652 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.652 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.652 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.652 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.652 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.652 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:30.652 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:30.909 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:18:30.909 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.909 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:30.909 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:30.909 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:30.909 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.909 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.909 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.909 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.909 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.909 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.909 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.166 00:18:31.166 04:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.166 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.166 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.423 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.423 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.423 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.423 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.423 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.423 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.423 { 00:18:31.423 "cntlid": 9, 00:18:31.423 "qid": 0, 00:18:31.423 "state": "enabled", 00:18:31.423 "thread": "nvmf_tgt_poll_group_000", 00:18:31.423 "listen_address": { 00:18:31.423 "trtype": "TCP", 00:18:31.423 "adrfam": "IPv4", 00:18:31.423 "traddr": "10.0.0.2", 00:18:31.423 "trsvcid": "4420" 00:18:31.423 }, 00:18:31.423 "peer_address": { 00:18:31.423 "trtype": "TCP", 00:18:31.423 "adrfam": "IPv4", 00:18:31.423 "traddr": "10.0.0.1", 00:18:31.423 "trsvcid": "60332" 00:18:31.423 }, 00:18:31.423 "auth": { 00:18:31.423 "state": "completed", 00:18:31.423 "digest": "sha256", 00:18:31.423 "dhgroup": "ffdhe2048" 00:18:31.423 } 00:18:31.423 } 00:18:31.423 ]' 00:18:31.423 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.423 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.423 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.423 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:31.423 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.680 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.680 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.680 04:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.938 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=: 00:18:32.871 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.871 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.871 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.872 04:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.872 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.872 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.872 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:32.872 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:33.130 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:33.130 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.130 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:33.130 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:33.130 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:33.130 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.130 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.130 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.130 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.130 04:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.130 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.130 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.388 00:18:33.388 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.388 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.388 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.674 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.674 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.674 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.674 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.674 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.674 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.674 { 
00:18:33.674 "cntlid": 11, 00:18:33.674 "qid": 0, 00:18:33.674 "state": "enabled", 00:18:33.674 "thread": "nvmf_tgt_poll_group_000", 00:18:33.674 "listen_address": { 00:18:33.674 "trtype": "TCP", 00:18:33.674 "adrfam": "IPv4", 00:18:33.674 "traddr": "10.0.0.2", 00:18:33.674 "trsvcid": "4420" 00:18:33.674 }, 00:18:33.674 "peer_address": { 00:18:33.674 "trtype": "TCP", 00:18:33.674 "adrfam": "IPv4", 00:18:33.674 "traddr": "10.0.0.1", 00:18:33.674 "trsvcid": "60364" 00:18:33.674 }, 00:18:33.674 "auth": { 00:18:33.674 "state": "completed", 00:18:33.674 "digest": "sha256", 00:18:33.674 "dhgroup": "ffdhe2048" 00:18:33.674 } 00:18:33.674 } 00:18:33.674 ]' 00:18:33.674 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.674 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.674 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.674 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.674 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.932 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.932 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.932 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.932 04:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==: 00:18:34.866 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.866 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:34.866 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.866 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.866 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.866 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.866 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:34.866 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:35.125 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:35.125 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.125 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:35.125 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:18:35.125 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:35.125 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.125 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.125 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.125 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.125 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.125 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.125 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.691 00:18:35.691 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.691 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.691 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.691 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.691 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.691 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.691 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.691 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.691 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.691 { 00:18:35.691 "cntlid": 13, 00:18:35.691 "qid": 0, 00:18:35.691 "state": "enabled", 00:18:35.691 "thread": "nvmf_tgt_poll_group_000", 00:18:35.691 "listen_address": { 00:18:35.691 "trtype": "TCP", 00:18:35.691 "adrfam": "IPv4", 00:18:35.691 "traddr": "10.0.0.2", 00:18:35.691 "trsvcid": "4420" 00:18:35.691 }, 00:18:35.691 "peer_address": { 00:18:35.691 "trtype": "TCP", 00:18:35.691 "adrfam": "IPv4", 00:18:35.691 "traddr": "10.0.0.1", 00:18:35.691 "trsvcid": "58802" 00:18:35.691 }, 00:18:35.691 "auth": { 00:18:35.691 "state": "completed", 00:18:35.691 "digest": "sha256", 00:18:35.691 "dhgroup": "ffdhe2048" 00:18:35.691 } 00:18:35.691 } 00:18:35.691 ]' 00:18:35.691 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.949 04:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.949 04:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.949 04:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.949 04:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.949 04:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.949 04:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.949 04:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.206 04:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:18:37.140 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.140 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.140 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.140 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.140 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.140 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.140 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:37.140 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:37.398 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:37.398 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.398 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:37.398 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:37.398 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:37.398 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.398 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:37.398 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.398 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.398 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.398 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.398 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.656 00:18:37.656 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.656 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.657 04:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.915 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.915 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.915 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.915 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.915 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.915 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.915 { 00:18:37.915 "cntlid": 15, 00:18:37.915 "qid": 0, 00:18:37.915 "state": "enabled", 00:18:37.915 "thread": "nvmf_tgt_poll_group_000", 00:18:37.915 "listen_address": { 00:18:37.915 "trtype": "TCP", 00:18:37.915 "adrfam": "IPv4", 00:18:37.915 "traddr": "10.0.0.2", 00:18:37.915 "trsvcid": "4420" 00:18:37.915 }, 00:18:37.915 "peer_address": { 00:18:37.915 "trtype": "TCP", 00:18:37.915 "adrfam": "IPv4", 00:18:37.915 "traddr": "10.0.0.1", 00:18:37.915 "trsvcid": "58822" 00:18:37.915 }, 00:18:37.915 "auth": { 
00:18:37.915 "state": "completed", 00:18:37.915 "digest": "sha256", 00:18:37.915 "dhgroup": "ffdhe2048" 00:18:37.915 } 00:18:37.915 } 00:18:37.915 ]' 00:18:37.915 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.915 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.915 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.915 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:37.915 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.174 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.174 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.174 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.431 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=: 00:18:39.365 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.365 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.365 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.365 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.365 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.365 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.365 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.365 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.365 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.647 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:39.647 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.647 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.647 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:39.647 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:39.647 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.647 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.647 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.647 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.647 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.647 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.647 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.904 00:18:39.904 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.904 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.904 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.162 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.162 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.162 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:40.162 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.162 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.162 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.162 { 00:18:40.162 "cntlid": 17, 00:18:40.162 "qid": 0, 00:18:40.162 "state": "enabled", 00:18:40.162 "thread": "nvmf_tgt_poll_group_000", 00:18:40.162 "listen_address": { 00:18:40.162 "trtype": "TCP", 00:18:40.162 "adrfam": "IPv4", 00:18:40.162 "traddr": "10.0.0.2", 00:18:40.162 "trsvcid": "4420" 00:18:40.162 }, 00:18:40.162 "peer_address": { 00:18:40.162 "trtype": "TCP", 00:18:40.162 "adrfam": "IPv4", 00:18:40.162 "traddr": "10.0.0.1", 00:18:40.162 "trsvcid": "58848" 00:18:40.162 }, 00:18:40.162 "auth": { 00:18:40.162 "state": "completed", 00:18:40.162 "digest": "sha256", 00:18:40.162 "dhgroup": "ffdhe3072" 00:18:40.162 } 00:18:40.162 } 00:18:40.162 ]' 00:18:40.162 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.162 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.162 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.162 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.162 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.162 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.162 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.162 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.420 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=: 00:18:41.353 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.353 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:41.353 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.353 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.353 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.353 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.353 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:41.353 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:41.611 04:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:41.611 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.611 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:41.611 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:41.611 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:41.611 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.611 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.611 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.611 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.611 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.611 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.611 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:42.176 00:18:42.176 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.176 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.176 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.176 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.176 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.176 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.176 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.176 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.176 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.176 { 00:18:42.176 "cntlid": 19, 00:18:42.176 "qid": 0, 00:18:42.176 "state": "enabled", 00:18:42.176 "thread": "nvmf_tgt_poll_group_000", 00:18:42.176 "listen_address": { 00:18:42.176 "trtype": "TCP", 00:18:42.176 "adrfam": "IPv4", 00:18:42.176 "traddr": "10.0.0.2", 00:18:42.176 "trsvcid": "4420" 00:18:42.176 }, 00:18:42.176 "peer_address": { 00:18:42.176 "trtype": "TCP", 00:18:42.176 "adrfam": "IPv4", 00:18:42.176 "traddr": "10.0.0.1", 00:18:42.176 "trsvcid": "58870" 00:18:42.176 }, 00:18:42.176 "auth": { 00:18:42.176 "state": "completed", 00:18:42.176 "digest": "sha256", 00:18:42.176 "dhgroup": "ffdhe3072" 00:18:42.176 } 00:18:42.176 } 00:18:42.176 ]' 00:18:42.176 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.434 
04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.434 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.434 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:42.434 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.434 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.434 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.434 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.692 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==: 00:18:43.624 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.624 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.624 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.624 04:01:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.624 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.624 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.624 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:43.624 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:43.882 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:43.882 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.882 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:43.882 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:43.882 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:43.882 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.882 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.882 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.882 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.882 04:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.882 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.882 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.139 00:18:44.139 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.139 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.139 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.397 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.397 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.397 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.397 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.397 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.397 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.397 { 
00:18:44.397 "cntlid": 21, 00:18:44.397 "qid": 0, 00:18:44.397 "state": "enabled", 00:18:44.397 "thread": "nvmf_tgt_poll_group_000", 00:18:44.397 "listen_address": { 00:18:44.397 "trtype": "TCP", 00:18:44.397 "adrfam": "IPv4", 00:18:44.397 "traddr": "10.0.0.2", 00:18:44.397 "trsvcid": "4420" 00:18:44.397 }, 00:18:44.397 "peer_address": { 00:18:44.397 "trtype": "TCP", 00:18:44.397 "adrfam": "IPv4", 00:18:44.397 "traddr": "10.0.0.1", 00:18:44.397 "trsvcid": "58900" 00:18:44.397 }, 00:18:44.397 "auth": { 00:18:44.397 "state": "completed", 00:18:44.397 "digest": "sha256", 00:18:44.397 "dhgroup": "ffdhe3072" 00:18:44.397 } 00:18:44.397 } 00:18:44.397 ]' 00:18:44.397 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.397 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.397 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.397 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.397 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.655 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.655 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.655 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.912 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:18:45.845 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.845 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:45.845 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.845 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.845 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.845 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.845 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:45.845 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:46.102 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:46.102 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.102 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.102 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe3072 00:18:46.102 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:46.102 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.103 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:46.103 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.103 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.103 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.103 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:46.103 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:46.360 00:18:46.360 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.360 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.360 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.618 04:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.618 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.618 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.618 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.618 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.618 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.618 { 00:18:46.618 "cntlid": 23, 00:18:46.618 "qid": 0, 00:18:46.618 "state": "enabled", 00:18:46.618 "thread": "nvmf_tgt_poll_group_000", 00:18:46.618 "listen_address": { 00:18:46.618 "trtype": "TCP", 00:18:46.618 "adrfam": "IPv4", 00:18:46.618 "traddr": "10.0.0.2", 00:18:46.618 "trsvcid": "4420" 00:18:46.618 }, 00:18:46.618 "peer_address": { 00:18:46.618 "trtype": "TCP", 00:18:46.618 "adrfam": "IPv4", 00:18:46.618 "traddr": "10.0.0.1", 00:18:46.618 "trsvcid": "59120" 00:18:46.618 }, 00:18:46.618 "auth": { 00:18:46.619 "state": "completed", 00:18:46.619 "digest": "sha256", 00:18:46.619 "dhgroup": "ffdhe3072" 00:18:46.619 } 00:18:46.619 } 00:18:46.619 ]' 00:18:46.619 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.619 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.619 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.619 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:46.619 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.876 04:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.876 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.876 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.134 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=: 00:18:48.066 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.066 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.066 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.066 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.066 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.066 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.066 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.066 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:48.066 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:48.322 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:18:48.322 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.322 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.322 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:48.322 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:48.322 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.322 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.322 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.322 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.322 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.322 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.322 04:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.578 00:18:48.579 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.579 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.579 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.835 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.835 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.835 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.835 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.835 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.835 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.835 { 00:18:48.835 "cntlid": 25, 00:18:48.835 "qid": 0, 00:18:48.835 "state": "enabled", 00:18:48.836 "thread": "nvmf_tgt_poll_group_000", 00:18:48.836 "listen_address": { 00:18:48.836 "trtype": "TCP", 00:18:48.836 "adrfam": "IPv4", 00:18:48.836 "traddr": "10.0.0.2", 00:18:48.836 "trsvcid": "4420" 00:18:48.836 }, 00:18:48.836 "peer_address": { 00:18:48.836 "trtype": "TCP", 00:18:48.836 "adrfam": "IPv4", 00:18:48.836 "traddr": "10.0.0.1", 
00:18:48.836 "trsvcid": "59152" 00:18:48.836 }, 00:18:48.836 "auth": { 00:18:48.836 "state": "completed", 00:18:48.836 "digest": "sha256", 00:18:48.836 "dhgroup": "ffdhe4096" 00:18:48.836 } 00:18:48.836 } 00:18:48.836 ]' 00:18:48.836 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.836 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.836 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.092 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.092 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.092 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.092 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.092 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.348 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=: 00:18:50.279 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:50.280 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.280 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.280 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.280 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.280 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.280 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:50.280 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:50.537 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:50.537 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.537 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.537 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:50.537 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:50.537 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.537 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.537 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.537 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.537 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.537 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.537 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.794 00:18:50.794 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.794 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.794 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.059 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.059 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.059 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:51.059 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.059 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.059 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.059 { 00:18:51.059 "cntlid": 27, 00:18:51.059 "qid": 0, 00:18:51.059 "state": "enabled", 00:18:51.059 "thread": "nvmf_tgt_poll_group_000", 00:18:51.059 "listen_address": { 00:18:51.059 "trtype": "TCP", 00:18:51.059 "adrfam": "IPv4", 00:18:51.059 "traddr": "10.0.0.2", 00:18:51.059 "trsvcid": "4420" 00:18:51.059 }, 00:18:51.059 "peer_address": { 00:18:51.059 "trtype": "TCP", 00:18:51.059 "adrfam": "IPv4", 00:18:51.059 "traddr": "10.0.0.1", 00:18:51.059 "trsvcid": "59176" 00:18:51.059 }, 00:18:51.059 "auth": { 00:18:51.059 "state": "completed", 00:18:51.059 "digest": "sha256", 00:18:51.059 "dhgroup": "ffdhe4096" 00:18:51.059 } 00:18:51.059 } 00:18:51.059 ]' 00:18:51.059 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.059 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.059 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.059 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.059 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.338 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.338 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.338 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.338 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==: 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.709 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.966 00:18:53.223 04:02:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.223 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.223 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.482 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.482 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.482 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.482 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.482 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.482 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.482 { 00:18:53.482 "cntlid": 29, 00:18:53.482 "qid": 0, 00:18:53.482 "state": "enabled", 00:18:53.482 "thread": "nvmf_tgt_poll_group_000", 00:18:53.482 "listen_address": { 00:18:53.482 "trtype": "TCP", 00:18:53.482 "adrfam": "IPv4", 00:18:53.482 "traddr": "10.0.0.2", 00:18:53.482 "trsvcid": "4420" 00:18:53.482 }, 00:18:53.482 "peer_address": { 00:18:53.482 "trtype": "TCP", 00:18:53.482 "adrfam": "IPv4", 00:18:53.482 "traddr": "10.0.0.1", 00:18:53.482 "trsvcid": "59206" 00:18:53.482 }, 00:18:53.482 "auth": { 00:18:53.482 "state": "completed", 00:18:53.482 "digest": "sha256", 00:18:53.482 "dhgroup": "ffdhe4096" 00:18:53.482 } 00:18:53.482 } 00:18:53.482 ]' 00:18:53.482 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.482 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.482 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.482 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:53.482 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.482 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.482 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.482 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.739 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:18:54.673 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.673 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.673 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.673 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:54.673 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.673 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.673 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.673 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.931 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:54.931 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.931 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:54.931 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:54.931 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.931 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.931 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:54.932 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.932 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.932 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:54.932 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.932 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.497 00:18:55.497 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.497 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.497 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.497 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.497 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.497 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.497 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.497 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.497 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.497 { 00:18:55.497 "cntlid": 31, 00:18:55.497 "qid": 0, 00:18:55.497 "state": "enabled", 00:18:55.497 "thread": "nvmf_tgt_poll_group_000", 
00:18:55.497 "listen_address": { 00:18:55.497 "trtype": "TCP", 00:18:55.497 "adrfam": "IPv4", 00:18:55.497 "traddr": "10.0.0.2", 00:18:55.497 "trsvcid": "4420" 00:18:55.497 }, 00:18:55.497 "peer_address": { 00:18:55.497 "trtype": "TCP", 00:18:55.497 "adrfam": "IPv4", 00:18:55.497 "traddr": "10.0.0.1", 00:18:55.497 "trsvcid": "55142" 00:18:55.497 }, 00:18:55.497 "auth": { 00:18:55.497 "state": "completed", 00:18:55.497 "digest": "sha256", 00:18:55.497 "dhgroup": "ffdhe4096" 00:18:55.497 } 00:18:55.497 } 00:18:55.497 ]' 00:18:55.497 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.755 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.755 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.755 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:55.755 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.755 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.755 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.755 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.013 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=: 
00:18:56.945 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.945 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:56.945 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.945 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.945 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.945 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.945 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.945 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:56.945 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:57.203 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:57.203 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.203 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:57.203 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:57.203 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:18:57.203 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.203 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.203 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.203 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.203 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.203 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.203 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.768 00:18:57.768 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.768 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.768 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.026 04:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.026 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.026 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.026 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.026 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.026 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.026 { 00:18:58.026 "cntlid": 33, 00:18:58.026 "qid": 0, 00:18:58.026 "state": "enabled", 00:18:58.026 "thread": "nvmf_tgt_poll_group_000", 00:18:58.026 "listen_address": { 00:18:58.026 "trtype": "TCP", 00:18:58.026 "adrfam": "IPv4", 00:18:58.026 "traddr": "10.0.0.2", 00:18:58.026 "trsvcid": "4420" 00:18:58.026 }, 00:18:58.026 "peer_address": { 00:18:58.026 "trtype": "TCP", 00:18:58.026 "adrfam": "IPv4", 00:18:58.026 "traddr": "10.0.0.1", 00:18:58.026 "trsvcid": "55176" 00:18:58.026 }, 00:18:58.026 "auth": { 00:18:58.026 "state": "completed", 00:18:58.026 "digest": "sha256", 00:18:58.026 "dhgroup": "ffdhe6144" 00:18:58.026 } 00:18:58.026 } 00:18:58.026 ]' 00:18:58.026 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.026 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.026 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.026 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.026 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.026 04:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.026 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.026 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.282 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=: 00:18:59.226 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.226 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.226 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.226 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.226 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.226 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.226 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:18:59.226 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:59.483 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:59.483 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.483 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.483 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:59.483 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:59.483 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.483 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.483 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.483 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.483 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.483 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.483 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.047 00:19:00.047 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.047 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.047 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.305 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.305 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.305 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.305 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.305 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.305 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.305 { 00:19:00.305 "cntlid": 35, 00:19:00.305 "qid": 0, 00:19:00.305 "state": "enabled", 00:19:00.305 "thread": "nvmf_tgt_poll_group_000", 00:19:00.305 "listen_address": { 00:19:00.305 "trtype": "TCP", 00:19:00.305 "adrfam": "IPv4", 00:19:00.305 "traddr": "10.0.0.2", 00:19:00.305 "trsvcid": "4420" 00:19:00.305 }, 00:19:00.305 "peer_address": { 00:19:00.305 "trtype": "TCP", 00:19:00.305 "adrfam": "IPv4", 00:19:00.305 "traddr": "10.0.0.1", 00:19:00.305 "trsvcid": "55206" 00:19:00.305 
}, 00:19:00.305 "auth": { 00:19:00.305 "state": "completed", 00:19:00.305 "digest": "sha256", 00:19:00.305 "dhgroup": "ffdhe6144" 00:19:00.305 } 00:19:00.305 } 00:19:00.305 ]' 00:19:00.305 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.305 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.305 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.305 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:00.305 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.563 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.563 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.563 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.821 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==: 00:19:01.753 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.753 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.753 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.753 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.753 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.753 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.753 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:01.753 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:02.011 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:02.011 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.011 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.011 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:02.011 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:02.011 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.011 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:19:02.011 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.011 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.011 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.011 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.011 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.575 00:19:02.575 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.575 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.575 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.832 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.832 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.832 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.832 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:19:02.832 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.832 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.832 { 00:19:02.832 "cntlid": 37, 00:19:02.832 "qid": 0, 00:19:02.832 "state": "enabled", 00:19:02.832 "thread": "nvmf_tgt_poll_group_000", 00:19:02.832 "listen_address": { 00:19:02.832 "trtype": "TCP", 00:19:02.832 "adrfam": "IPv4", 00:19:02.832 "traddr": "10.0.0.2", 00:19:02.832 "trsvcid": "4420" 00:19:02.832 }, 00:19:02.832 "peer_address": { 00:19:02.832 "trtype": "TCP", 00:19:02.832 "adrfam": "IPv4", 00:19:02.832 "traddr": "10.0.0.1", 00:19:02.832 "trsvcid": "55214" 00:19:02.832 }, 00:19:02.832 "auth": { 00:19:02.832 "state": "completed", 00:19:02.832 "digest": "sha256", 00:19:02.832 "dhgroup": "ffdhe6144" 00:19:02.832 } 00:19:02.832 } 00:19:02.832 ]' 00:19:02.832 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.832 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.832 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.832 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:02.832 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.832 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.832 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.832 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:03.396 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:04.328 04:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:04.328 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:04.891
00:19:04.891 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:04.891 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:04.891 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:05.149 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:05.149 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:05.149 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:05.149 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.149 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:05.149 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:05.149 {
00:19:05.149 "cntlid": 39,
00:19:05.149 "qid": 0,
00:19:05.149 "state": "enabled",
00:19:05.149 "thread": "nvmf_tgt_poll_group_000",
00:19:05.149 "listen_address": {
00:19:05.149 "trtype": "TCP",
00:19:05.149 "adrfam": "IPv4",
00:19:05.149 "traddr": "10.0.0.2",
00:19:05.149 "trsvcid": "4420"
00:19:05.149 },
00:19:05.149 "peer_address": {
00:19:05.149 "trtype": "TCP",
00:19:05.149 "adrfam": "IPv4",
00:19:05.149 "traddr": "10.0.0.1",
00:19:05.149 "trsvcid": "40548"
00:19:05.149 },
00:19:05.149 "auth": {
00:19:05.149 "state": "completed",
00:19:05.149 "digest": "sha256",
00:19:05.149 "dhgroup": "ffdhe6144"
00:19:05.149 }
00:19:05.149 }
00:19:05.149 ]'
00:19:05.149 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:05.407 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:05.407 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:05.407 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:05.407 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:05.407 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:05.407 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:05.408 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:05.665 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=:
00:19:06.599 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:06.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:06.599 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:06.599 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:06.599 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:06.599 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:06.599 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:06.599 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:06.599 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:06.599 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:06.858 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0
00:19:06.858 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:06.858 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:06.858 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:19:06.858 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:06.858 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:06.858 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:06.858 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:06.858 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:06.858 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:06.858 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:06.858 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:07.791
00:19:07.791 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:07.791 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:07.791 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:08.049 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:08.049 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:08.049 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.049 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.049 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.049 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:08.049 {
00:19:08.049 "cntlid": 41,
00:19:08.049 "qid": 0,
00:19:08.049 "state": "enabled",
00:19:08.049 "thread": "nvmf_tgt_poll_group_000",
00:19:08.049 "listen_address": {
00:19:08.049 "trtype": "TCP",
00:19:08.049 "adrfam": "IPv4",
00:19:08.049 "traddr": "10.0.0.2",
00:19:08.049 "trsvcid": "4420"
00:19:08.049 },
00:19:08.049 "peer_address": {
00:19:08.049 "trtype": "TCP",
00:19:08.049 "adrfam": "IPv4",
00:19:08.049 "traddr": "10.0.0.1",
00:19:08.049 "trsvcid": "40580"
00:19:08.049 },
00:19:08.049 "auth": {
00:19:08.049 "state": "completed",
00:19:08.049 "digest": "sha256",
00:19:08.049 "dhgroup": "ffdhe8192"
00:19:08.049 }
00:19:08.049 }
00:19:08.049 ]'
00:19:08.049 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:08.307 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:08.307 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:08.308 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:08.308 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:08.308 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:08.308 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:08.308 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:08.566 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=:
00:19:09.526 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:09.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:09.526 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:09.526 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:09.526 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:09.526 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:09.526 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:09.526 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:09.526 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:09.782 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1
00:19:09.782 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:09.782 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:09.782 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:19:09.782 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:09.782 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:09.782 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:09.782 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:09.782 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:09.782 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:09.782 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:09.782 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:10.714
00:19:10.714 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:10.714 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:10.714 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:10.971 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:10.971 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:10.971 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:10.971 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.971 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:10.971 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:10.971 {
00:19:10.971 "cntlid": 43,
00:19:10.971 "qid": 0,
00:19:10.971 "state": "enabled",
00:19:10.971 "thread": "nvmf_tgt_poll_group_000",
00:19:10.971 "listen_address": {
00:19:10.971 "trtype": "TCP",
00:19:10.971 "adrfam": "IPv4",
00:19:10.971 "traddr": "10.0.0.2",
00:19:10.971 "trsvcid": "4420"
00:19:10.971 },
00:19:10.971 "peer_address": {
00:19:10.971 "trtype": "TCP",
00:19:10.971 "adrfam": "IPv4",
00:19:10.971 "traddr": "10.0.0.1",
00:19:10.971 "trsvcid": "40614"
00:19:10.971 },
00:19:10.971 "auth": {
00:19:10.971 "state": "completed",
00:19:10.971 "digest": "sha256",
00:19:10.971 "dhgroup": "ffdhe8192"
00:19:10.971 }
00:19:10.971 }
00:19:10.971 ]'
00:19:10.971 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:10.971 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:10.971 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:10.972 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:10.972 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:10.972 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:10.972 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:10.972 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:11.229 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==:
00:19:12.163 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:12.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:12.163 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:12.163 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:12.163 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:12.163 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:12.163 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:12.163 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:12.163 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:12.421 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2
00:19:12.421 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:12.421 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:12.421 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:19:12.421 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:12.421 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:12.421 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:12.421 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:12.421 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:12.421 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:12.421 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:12.421 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:13.353
00:19:13.354 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:13.354 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:13.354 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:13.611 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:13.611 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:13.611 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:13.611 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:13.611 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:13.611 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:13.611 {
00:19:13.611 "cntlid": 45,
00:19:13.611 "qid": 0,
00:19:13.611 "state": "enabled",
00:19:13.611 "thread": "nvmf_tgt_poll_group_000",
00:19:13.611 "listen_address": {
00:19:13.611 "trtype": "TCP",
00:19:13.611 "adrfam": "IPv4",
00:19:13.611 "traddr": "10.0.0.2",
00:19:13.611 "trsvcid": "4420"
00:19:13.611 },
00:19:13.611 "peer_address": {
00:19:13.611 "trtype": "TCP",
00:19:13.611 "adrfam": "IPv4",
00:19:13.611 "traddr": "10.0.0.1",
00:19:13.611 "trsvcid": "40646"
00:19:13.611 },
00:19:13.611 "auth": {
00:19:13.611 "state": "completed",
00:19:13.611 "digest": "sha256",
00:19:13.611 "dhgroup": "ffdhe8192"
00:19:13.611 }
00:19:13.611 }
00:19:13.611 ]'
00:19:13.611 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:13.611 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:13.611 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:13.611 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:13.611 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:13.869 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:13.869 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:13.869 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:14.126 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH:
00:19:15.057 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:15.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:15.057 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:15.058 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:15.058 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.058 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:15.058 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:15.058 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:15.058 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:15.315 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3
00:19:15.315 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:15.315 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:15.315 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:19:15.315 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:15.315 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:15.315 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:19:15.315 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:15.315 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.315 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:15.315 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:15.315 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:16.247
00:19:16.247 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:16.247 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:16.247 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:16.504 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:16.504 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:16.504 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:16.504 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.504 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:16.504 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:16.504 {
00:19:16.504 "cntlid": 47,
00:19:16.504 "qid": 0,
00:19:16.504 "state": "enabled",
00:19:16.504 "thread": "nvmf_tgt_poll_group_000",
00:19:16.504 "listen_address": {
00:19:16.504 "trtype": "TCP",
00:19:16.504 "adrfam": "IPv4",
00:19:16.504 "traddr": "10.0.0.2",
00:19:16.504 "trsvcid": "4420"
00:19:16.504 },
00:19:16.504 "peer_address": {
00:19:16.504 "trtype": "TCP",
00:19:16.504 "adrfam": "IPv4",
00:19:16.504 "traddr": "10.0.0.1",
00:19:16.504 "trsvcid": "42490"
00:19:16.504 },
00:19:16.504 "auth": {
00:19:16.504 "state": "completed",
00:19:16.504 "digest": "sha256",
00:19:16.504 "dhgroup": "ffdhe8192"
00:19:16.504 }
00:19:16.504 }
00:19:16.504 ]'
00:19:16.504 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:16.504 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:16.504 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:16.504 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:16.504 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:16.504 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:16.504 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:16.504 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:16.761 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=:
00:19:17.692 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:17.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:17.692 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:17.692 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:17.692 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:17.692 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:17.692 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:19:17.692 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:17.692 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:17.692 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:17.692 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:17.950 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0
00:19:17.950 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:17.950 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:17.950 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:19:17.950 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:17.950 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:17.950 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:17.950 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:17.950 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:17.950 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:17.950 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:17.950 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:18.207
00:19:18.207 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:18.207 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:18.207 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:18.463 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:18.463 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:18.463 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:18.463 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:18.463 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:18.463 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:18.463 {
00:19:18.463 "cntlid": 49,
00:19:18.463 "qid": 0,
00:19:18.463 "state": "enabled",
00:19:18.463 "thread": "nvmf_tgt_poll_group_000",
00:19:18.463 "listen_address": {
00:19:18.463 "trtype": "TCP",
00:19:18.463 "adrfam": "IPv4",
00:19:18.463 "traddr": "10.0.0.2",
00:19:18.463 "trsvcid": "4420"
00:19:18.464 },
00:19:18.464 "peer_address": {
00:19:18.464 "trtype": "TCP",
00:19:18.464 "adrfam": "IPv4",
00:19:18.464 "traddr": "10.0.0.1",
00:19:18.464 "trsvcid": "42532"
00:19:18.464 },
00:19:18.464 "auth": {
00:19:18.464 "state": "completed",
00:19:18.464 "digest": "sha384",
00:19:18.464 "dhgroup": "null"
00:19:18.464 }
00:19:18.464 }
00:19:18.464 ]'
00:19:18.464 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:18.720 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:18.720 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:18.720 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:19:18.720 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:18.720 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:18.720 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:18.720 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:18.977 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=:
00:19:19.911 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:19.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:19.911 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.911 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.911 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.911 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.911 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:19.911 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:20.169 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:20.169 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.169 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:20.169 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:20.169 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:20.169 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.169 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.169 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.169 04:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.169 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.169 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.169 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.427 00:19:20.685 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.685 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.685 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.943 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.943 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.943 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.943 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.943 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:20.943 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.943 { 00:19:20.943 "cntlid": 51, 00:19:20.943 "qid": 0, 00:19:20.943 "state": "enabled", 00:19:20.943 "thread": "nvmf_tgt_poll_group_000", 00:19:20.943 "listen_address": { 00:19:20.943 "trtype": "TCP", 00:19:20.943 "adrfam": "IPv4", 00:19:20.943 "traddr": "10.0.0.2", 00:19:20.943 "trsvcid": "4420" 00:19:20.943 }, 00:19:20.943 "peer_address": { 00:19:20.943 "trtype": "TCP", 00:19:20.943 "adrfam": "IPv4", 00:19:20.943 "traddr": "10.0.0.1", 00:19:20.943 "trsvcid": "42562" 00:19:20.943 }, 00:19:20.943 "auth": { 00:19:20.943 "state": "completed", 00:19:20.943 "digest": "sha384", 00:19:20.943 "dhgroup": "null" 00:19:20.943 } 00:19:20.943 } 00:19:20.943 ]' 00:19:20.943 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.943 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.943 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.943 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:20.943 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.943 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.943 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.943 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.202 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==: 00:19:22.135 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.135 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.135 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.135 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.135 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.135 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.135 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:22.135 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:22.393 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:22.393 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.393 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:22.393 04:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:22.393 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.393 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.393 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.393 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.393 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.393 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.393 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.393 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.650 00:19:22.650 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.650 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.650 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.907 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.907 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.907 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.907 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.907 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.907 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.907 { 00:19:22.907 "cntlid": 53, 00:19:22.907 "qid": 0, 00:19:22.907 "state": "enabled", 00:19:22.907 "thread": "nvmf_tgt_poll_group_000", 00:19:22.907 "listen_address": { 00:19:22.907 "trtype": "TCP", 00:19:22.907 "adrfam": "IPv4", 00:19:22.907 "traddr": "10.0.0.2", 00:19:22.907 "trsvcid": "4420" 00:19:22.907 }, 00:19:22.907 "peer_address": { 00:19:22.907 "trtype": "TCP", 00:19:22.907 "adrfam": "IPv4", 00:19:22.907 "traddr": "10.0.0.1", 00:19:22.907 "trsvcid": "42582" 00:19:22.907 }, 00:19:22.907 "auth": { 00:19:22.907 "state": "completed", 00:19:22.907 "digest": "sha384", 00:19:22.907 "dhgroup": "null" 00:19:22.907 } 00:19:22.907 } 00:19:22.907 ]' 00:19:22.907 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.165 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.165 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.165 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:23.165 04:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.165 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.165 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.165 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.423 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:19:24.355 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.355 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.355 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.355 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.355 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.355 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.356 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:24.356 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:24.613 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:24.613 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.613 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:24.613 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:24.613 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.613 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.613 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:24.613 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.613 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.613 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.613 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.614 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.179 00:19:25.179 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.179 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.179 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.436 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.436 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.436 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.436 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.436 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.436 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.436 { 00:19:25.436 "cntlid": 55, 00:19:25.436 "qid": 0, 00:19:25.436 "state": "enabled", 00:19:25.436 "thread": "nvmf_tgt_poll_group_000", 00:19:25.436 "listen_address": { 00:19:25.436 "trtype": "TCP", 00:19:25.436 "adrfam": "IPv4", 00:19:25.436 "traddr": "10.0.0.2", 00:19:25.436 "trsvcid": "4420" 00:19:25.436 }, 00:19:25.436 "peer_address": { 00:19:25.436 "trtype": "TCP", 00:19:25.436 "adrfam": "IPv4", 00:19:25.436 "traddr": "10.0.0.1", 00:19:25.436 "trsvcid": "34162" 00:19:25.436 }, 00:19:25.436 "auth": { 
00:19:25.436 "state": "completed", 00:19:25.436 "digest": "sha384", 00:19:25.436 "dhgroup": "null" 00:19:25.436 } 00:19:25.436 } 00:19:25.436 ]' 00:19:25.436 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.436 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.436 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.436 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:25.436 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.436 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.436 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.436 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.694 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=: 00:19:26.628 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.628 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.628 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.628 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.628 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.628 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.628 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.628 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:26.628 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:26.889 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:26.890 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.890 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:26.890 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:26.890 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:26.890 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.890 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.890 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.890 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.890 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.890 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.890 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.147 00:19:27.147 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.147 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.148 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.406 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.406 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.406 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:27.406 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.406 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.406 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.406 { 00:19:27.406 "cntlid": 57, 00:19:27.406 "qid": 0, 00:19:27.406 "state": "enabled", 00:19:27.406 "thread": "nvmf_tgt_poll_group_000", 00:19:27.406 "listen_address": { 00:19:27.406 "trtype": "TCP", 00:19:27.406 "adrfam": "IPv4", 00:19:27.406 "traddr": "10.0.0.2", 00:19:27.406 "trsvcid": "4420" 00:19:27.406 }, 00:19:27.406 "peer_address": { 00:19:27.406 "trtype": "TCP", 00:19:27.406 "adrfam": "IPv4", 00:19:27.406 "traddr": "10.0.0.1", 00:19:27.406 "trsvcid": "34184" 00:19:27.406 }, 00:19:27.406 "auth": { 00:19:27.406 "state": "completed", 00:19:27.406 "digest": "sha384", 00:19:27.406 "dhgroup": "ffdhe2048" 00:19:27.406 } 00:19:27.406 } 00:19:27.406 ]' 00:19:27.406 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.664 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.664 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.664 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:27.664 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.664 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.664 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.664 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.922 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=: 00:19:28.852 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.852 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.852 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.852 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.852 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.853 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:28.853 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:29.110 04:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:29.110 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.110 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:29.110 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:29.110 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:29.110 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.110 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.110 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.110 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.110 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.110 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.110 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:29.368 00:19:29.368 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.368 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.368 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.626 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.626 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.626 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.626 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.626 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.626 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.626 { 00:19:29.626 "cntlid": 59, 00:19:29.626 "qid": 0, 00:19:29.626 "state": "enabled", 00:19:29.626 "thread": "nvmf_tgt_poll_group_000", 00:19:29.626 "listen_address": { 00:19:29.626 "trtype": "TCP", 00:19:29.626 "adrfam": "IPv4", 00:19:29.626 "traddr": "10.0.0.2", 00:19:29.626 "trsvcid": "4420" 00:19:29.626 }, 00:19:29.626 "peer_address": { 00:19:29.626 "trtype": "TCP", 00:19:29.626 "adrfam": "IPv4", 00:19:29.626 "traddr": "10.0.0.1", 00:19:29.626 "trsvcid": "34218" 00:19:29.626 }, 00:19:29.626 "auth": { 00:19:29.626 "state": "completed", 00:19:29.626 "digest": "sha384", 00:19:29.626 "dhgroup": "ffdhe2048" 00:19:29.626 } 00:19:29.626 } 00:19:29.626 ]' 00:19:29.626 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.884 
04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.884 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.884 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:29.884 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.884 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.884 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.884 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.141 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==: 00:19:31.074 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.074 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.074 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.074 04:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.074 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.074 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.074 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:31.074 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:31.332 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:31.332 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.332 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:31.332 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:31.332 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:31.332 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.332 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.332 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.332 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.332 04:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.332 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.332 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.898 00:19:31.898 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.898 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.898 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.898 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.898 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.898 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.898 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.898 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.898 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.898 { 
00:19:31.898 "cntlid": 61, 00:19:31.898 "qid": 0, 00:19:31.898 "state": "enabled", 00:19:31.898 "thread": "nvmf_tgt_poll_group_000", 00:19:31.898 "listen_address": { 00:19:31.898 "trtype": "TCP", 00:19:31.898 "adrfam": "IPv4", 00:19:31.898 "traddr": "10.0.0.2", 00:19:31.898 "trsvcid": "4420" 00:19:31.898 }, 00:19:31.898 "peer_address": { 00:19:31.898 "trtype": "TCP", 00:19:31.898 "adrfam": "IPv4", 00:19:31.898 "traddr": "10.0.0.1", 00:19:31.898 "trsvcid": "34254" 00:19:31.898 }, 00:19:31.898 "auth": { 00:19:31.898 "state": "completed", 00:19:31.898 "digest": "sha384", 00:19:31.898 "dhgroup": "ffdhe2048" 00:19:31.898 } 00:19:31.898 } 00:19:31.898 ]' 00:19:32.156 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.156 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:32.156 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.156 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:32.156 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.156 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.156 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.156 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.414 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:19:33.347 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.347 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.347 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.347 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.347 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.347 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.347 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:33.347 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:33.604 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:33.604 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.604 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:33.604 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:19:33.604 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:33.604 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.604 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:33.604 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.604 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.604 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.604 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.604 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.862 00:19:33.862 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.862 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.862 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.120 04:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.120 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.120 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.120 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.120 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.120 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.120 { 00:19:34.120 "cntlid": 63, 00:19:34.120 "qid": 0, 00:19:34.120 "state": "enabled", 00:19:34.120 "thread": "nvmf_tgt_poll_group_000", 00:19:34.120 "listen_address": { 00:19:34.120 "trtype": "TCP", 00:19:34.120 "adrfam": "IPv4", 00:19:34.120 "traddr": "10.0.0.2", 00:19:34.120 "trsvcid": "4420" 00:19:34.120 }, 00:19:34.120 "peer_address": { 00:19:34.120 "trtype": "TCP", 00:19:34.120 "adrfam": "IPv4", 00:19:34.120 "traddr": "10.0.0.1", 00:19:34.120 "trsvcid": "34278" 00:19:34.120 }, 00:19:34.120 "auth": { 00:19:34.120 "state": "completed", 00:19:34.120 "digest": "sha384", 00:19:34.120 "dhgroup": "ffdhe2048" 00:19:34.120 } 00:19:34.120 } 00:19:34.120 ]' 00:19:34.120 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.120 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.120 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.378 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:34.378 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.378 04:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.378 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.378 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.636 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=: 00:19:35.570 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.570 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.570 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.570 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.570 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.570 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.570 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.570 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:35.570 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:35.827 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:35.827 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.827 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:35.827 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:35.827 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:35.827 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.827 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.827 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.827 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.827 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.827 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.827 04:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.085 00:19:36.085 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.085 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.085 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.343 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.343 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.343 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.343 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.343 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.343 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.343 { 00:19:36.343 "cntlid": 65, 00:19:36.343 "qid": 0, 00:19:36.343 "state": "enabled", 00:19:36.343 "thread": "nvmf_tgt_poll_group_000", 00:19:36.343 "listen_address": { 00:19:36.343 "trtype": "TCP", 00:19:36.343 "adrfam": "IPv4", 00:19:36.343 "traddr": "10.0.0.2", 00:19:36.343 "trsvcid": "4420" 00:19:36.343 }, 00:19:36.343 "peer_address": { 00:19:36.343 "trtype": "TCP", 00:19:36.343 "adrfam": "IPv4", 00:19:36.343 "traddr": "10.0.0.1", 
00:19:36.343 "trsvcid": "56038" 00:19:36.343 }, 00:19:36.343 "auth": { 00:19:36.343 "state": "completed", 00:19:36.343 "digest": "sha384", 00:19:36.343 "dhgroup": "ffdhe3072" 00:19:36.343 } 00:19:36.343 } 00:19:36.343 ]' 00:19:36.343 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.343 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.343 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.600 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.600 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.600 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.600 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.600 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.858 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=: 00:19:37.790 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:37.790 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.790 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.790 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.790 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.790 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.790 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:37.790 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:38.048 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:38.048 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.048 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.048 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:38.048 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:38.048 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.048 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.048 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.048 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.048 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.048 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.048 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.306 00:19:38.306 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.306 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.306 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.563 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.563 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.563 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:38.563 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.563 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.563 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.563 { 00:19:38.563 "cntlid": 67, 00:19:38.563 "qid": 0, 00:19:38.563 "state": "enabled", 00:19:38.563 "thread": "nvmf_tgt_poll_group_000", 00:19:38.563 "listen_address": { 00:19:38.563 "trtype": "TCP", 00:19:38.563 "adrfam": "IPv4", 00:19:38.563 "traddr": "10.0.0.2", 00:19:38.563 "trsvcid": "4420" 00:19:38.563 }, 00:19:38.563 "peer_address": { 00:19:38.563 "trtype": "TCP", 00:19:38.563 "adrfam": "IPv4", 00:19:38.563 "traddr": "10.0.0.1", 00:19:38.563 "trsvcid": "56072" 00:19:38.563 }, 00:19:38.563 "auth": { 00:19:38.563 "state": "completed", 00:19:38.563 "digest": "sha384", 00:19:38.563 "dhgroup": "ffdhe3072" 00:19:38.563 } 00:19:38.563 } 00:19:38.563 ]' 00:19:38.563 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.563 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.563 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.820 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.820 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.820 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.820 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.820 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.077 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==: 00:19:40.008 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.008 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.008 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.008 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.008 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.008 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.008 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:40.008 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:40.265 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:19:40.265 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.265 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:40.265 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:40.265 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:40.265 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.265 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.265 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.265 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.265 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.265 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.265 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.523 00:19:40.523 04:02:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.523 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.523 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.781 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.781 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.781 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.781 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.781 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.781 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.781 { 00:19:40.781 "cntlid": 69, 00:19:40.781 "qid": 0, 00:19:40.781 "state": "enabled", 00:19:40.781 "thread": "nvmf_tgt_poll_group_000", 00:19:40.781 "listen_address": { 00:19:40.781 "trtype": "TCP", 00:19:40.781 "adrfam": "IPv4", 00:19:40.781 "traddr": "10.0.0.2", 00:19:40.781 "trsvcid": "4420" 00:19:40.781 }, 00:19:40.781 "peer_address": { 00:19:40.781 "trtype": "TCP", 00:19:40.781 "adrfam": "IPv4", 00:19:40.781 "traddr": "10.0.0.1", 00:19:40.781 "trsvcid": "56098" 00:19:40.781 }, 00:19:40.781 "auth": { 00:19:40.781 "state": "completed", 00:19:40.781 "digest": "sha384", 00:19:40.781 "dhgroup": "ffdhe3072" 00:19:40.781 } 00:19:40.781 } 00:19:40.781 ]' 00:19:40.781 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.781 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.781 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.038 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.038 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.038 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.038 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.038 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.295 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:19:42.230 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.230 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.230 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.230 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.230 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.230 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.230 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:42.230 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:42.530 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:42.530 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.530 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:42.530 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:42.530 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.530 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.530 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:42.530 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.530 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.530 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:42.530 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.530 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.788 00:19:42.788 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.788 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.788 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.045 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.045 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.045 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.045 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.045 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.045 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.045 { 00:19:43.045 "cntlid": 71, 00:19:43.045 "qid": 0, 00:19:43.045 "state": "enabled", 00:19:43.045 "thread": "nvmf_tgt_poll_group_000", 
00:19:43.045 "listen_address": { 00:19:43.045 "trtype": "TCP", 00:19:43.045 "adrfam": "IPv4", 00:19:43.045 "traddr": "10.0.0.2", 00:19:43.045 "trsvcid": "4420" 00:19:43.045 }, 00:19:43.045 "peer_address": { 00:19:43.045 "trtype": "TCP", 00:19:43.045 "adrfam": "IPv4", 00:19:43.045 "traddr": "10.0.0.1", 00:19:43.045 "trsvcid": "56136" 00:19:43.045 }, 00:19:43.045 "auth": { 00:19:43.045 "state": "completed", 00:19:43.045 "digest": "sha384", 00:19:43.045 "dhgroup": "ffdhe3072" 00:19:43.045 } 00:19:43.045 } 00:19:43.045 ]' 00:19:43.045 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.045 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.045 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:43.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.561 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=: 
00:19:44.495 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.495 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.495 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.495 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.495 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.495 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.495 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.495 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:44.495 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:44.753 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:44.753 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.753 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:44.753 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:44.753 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:19:44.753 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.753 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.753 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.753 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.753 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.753 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.753 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.319 00:19:45.319 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.319 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.319 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.319 04:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.319 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.319 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.319 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.319 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.319 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.319 { 00:19:45.319 "cntlid": 73, 00:19:45.319 "qid": 0, 00:19:45.319 "state": "enabled", 00:19:45.319 "thread": "nvmf_tgt_poll_group_000", 00:19:45.319 "listen_address": { 00:19:45.319 "trtype": "TCP", 00:19:45.319 "adrfam": "IPv4", 00:19:45.319 "traddr": "10.0.0.2", 00:19:45.319 "trsvcid": "4420" 00:19:45.319 }, 00:19:45.319 "peer_address": { 00:19:45.319 "trtype": "TCP", 00:19:45.319 "adrfam": "IPv4", 00:19:45.319 "traddr": "10.0.0.1", 00:19:45.319 "trsvcid": "40420" 00:19:45.319 }, 00:19:45.319 "auth": { 00:19:45.319 "state": "completed", 00:19:45.319 "digest": "sha384", 00:19:45.320 "dhgroup": "ffdhe4096" 00:19:45.320 } 00:19:45.320 } 00:19:45.320 ]' 00:19:45.320 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.577 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.577 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.577 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.577 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.577 04:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.577 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.577 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.835 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=: 00:19:46.767 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.767 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.767 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.767 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.767 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.767 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.767 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:19:46.767 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:47.025 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:47.025 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.025 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:47.025 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:47.025 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:47.025 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.025 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.025 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.025 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.025 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.025 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.025 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.283 00:19:47.283 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.283 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.283 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.541 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.541 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.541 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.541 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.541 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.541 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.541 { 00:19:47.541 "cntlid": 75, 00:19:47.541 "qid": 0, 00:19:47.541 "state": "enabled", 00:19:47.541 "thread": "nvmf_tgt_poll_group_000", 00:19:47.541 "listen_address": { 00:19:47.541 "trtype": "TCP", 00:19:47.541 "adrfam": "IPv4", 00:19:47.541 "traddr": "10.0.0.2", 00:19:47.541 "trsvcid": "4420" 00:19:47.541 }, 00:19:47.541 "peer_address": { 00:19:47.541 "trtype": "TCP", 00:19:47.541 "adrfam": "IPv4", 00:19:47.541 "traddr": "10.0.0.1", 00:19:47.541 "trsvcid": "40454" 00:19:47.541 
}, 00:19:47.541 "auth": { 00:19:47.541 "state": "completed", 00:19:47.541 "digest": "sha384", 00:19:47.541 "dhgroup": "ffdhe4096" 00:19:47.541 } 00:19:47.541 } 00:19:47.541 ]' 00:19:47.541 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.541 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.541 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.798 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.799 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.799 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.799 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.799 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.056 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==: 00:19:48.998 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.998 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.998 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.998 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.998 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.998 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.998 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:48.998 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:49.255 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:49.255 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.255 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:49.255 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:49.255 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:49.255 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.255 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:19:49.255 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.255 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.255 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.255 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.255 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.513 00:19:49.513 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.513 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.513 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.770 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.770 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.770 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.770 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:19:49.770 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.770 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.770 { 00:19:49.770 "cntlid": 77, 00:19:49.770 "qid": 0, 00:19:49.770 "state": "enabled", 00:19:49.770 "thread": "nvmf_tgt_poll_group_000", 00:19:49.770 "listen_address": { 00:19:49.770 "trtype": "TCP", 00:19:49.770 "adrfam": "IPv4", 00:19:49.770 "traddr": "10.0.0.2", 00:19:49.770 "trsvcid": "4420" 00:19:49.770 }, 00:19:49.770 "peer_address": { 00:19:49.770 "trtype": "TCP", 00:19:49.770 "adrfam": "IPv4", 00:19:49.770 "traddr": "10.0.0.1", 00:19:49.770 "trsvcid": "40492" 00:19:49.770 }, 00:19:49.770 "auth": { 00:19:49.770 "state": "completed", 00:19:49.770 "digest": "sha384", 00:19:49.770 "dhgroup": "ffdhe4096" 00:19:49.770 } 00:19:49.770 } 00:19:49.770 ]' 00:19:50.028 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.028 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.028 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.028 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.028 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.028 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.028 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.028 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:50.284 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:19:51.217 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.217 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.217 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.217 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.217 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.217 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.217 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.217 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.783 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:51.783 04:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.783 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:51.783 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:51.783 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:51.783 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.783 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:51.783 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.783 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.783 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.783 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.783 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.040 00:19:52.040 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.040 04:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.040 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.298 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.298 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.298 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.298 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.298 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.298 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.298 { 00:19:52.298 "cntlid": 79, 00:19:52.298 "qid": 0, 00:19:52.298 "state": "enabled", 00:19:52.298 "thread": "nvmf_tgt_poll_group_000", 00:19:52.298 "listen_address": { 00:19:52.298 "trtype": "TCP", 00:19:52.298 "adrfam": "IPv4", 00:19:52.298 "traddr": "10.0.0.2", 00:19:52.298 "trsvcid": "4420" 00:19:52.298 }, 00:19:52.298 "peer_address": { 00:19:52.298 "trtype": "TCP", 00:19:52.298 "adrfam": "IPv4", 00:19:52.298 "traddr": "10.0.0.1", 00:19:52.298 "trsvcid": "40518" 00:19:52.298 }, 00:19:52.298 "auth": { 00:19:52.298 "state": "completed", 00:19:52.298 "digest": "sha384", 00:19:52.298 "dhgroup": "ffdhe4096" 00:19:52.298 } 00:19:52.298 } 00:19:52.298 ]' 00:19:52.298 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.298 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.298 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.298 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:52.298 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.555 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.555 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.555 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.813 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=: 00:19:53.744 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.744 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.744 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.744 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.744 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.744 04:03:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.744 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.744 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:53.744 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:54.001 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:54.001 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.001 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:54.001 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:54.001 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:54.001 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.001 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.001 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.001 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.001 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.001 04:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.001 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.566 00:19:54.566 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.566 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.566 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.824 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.824 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.824 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.824 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.824 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.824 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.824 { 00:19:54.824 "cntlid": 81, 00:19:54.824 "qid": 0, 00:19:54.824 "state": "enabled", 00:19:54.824 "thread": 
"nvmf_tgt_poll_group_000", 00:19:54.824 "listen_address": { 00:19:54.824 "trtype": "TCP", 00:19:54.824 "adrfam": "IPv4", 00:19:54.824 "traddr": "10.0.0.2", 00:19:54.824 "trsvcid": "4420" 00:19:54.824 }, 00:19:54.824 "peer_address": { 00:19:54.824 "trtype": "TCP", 00:19:54.824 "adrfam": "IPv4", 00:19:54.824 "traddr": "10.0.0.1", 00:19:54.824 "trsvcid": "38734" 00:19:54.824 }, 00:19:54.824 "auth": { 00:19:54.824 "state": "completed", 00:19:54.824 "digest": "sha384", 00:19:54.824 "dhgroup": "ffdhe6144" 00:19:54.824 } 00:19:54.824 } 00:19:54.824 ]' 00:19:54.824 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.824 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.824 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.824 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:54.824 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.824 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.824 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.824 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.081 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=: 00:19:56.012 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.012 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.012 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.012 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.012 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.012 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.012 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:56.012 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:56.270 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:56.270 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.270 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:56.270 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:19:56.270 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:56.270 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.270 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.270 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.270 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.270 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.270 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.270 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.835 00:19:56.835 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.835 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.835 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.092 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.092 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.092 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.092 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.092 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.092 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.092 { 00:19:57.092 "cntlid": 83, 00:19:57.092 "qid": 0, 00:19:57.092 "state": "enabled", 00:19:57.092 "thread": "nvmf_tgt_poll_group_000", 00:19:57.092 "listen_address": { 00:19:57.092 "trtype": "TCP", 00:19:57.092 "adrfam": "IPv4", 00:19:57.092 "traddr": "10.0.0.2", 00:19:57.092 "trsvcid": "4420" 00:19:57.092 }, 00:19:57.092 "peer_address": { 00:19:57.092 "trtype": "TCP", 00:19:57.092 "adrfam": "IPv4", 00:19:57.092 "traddr": "10.0.0.1", 00:19:57.092 "trsvcid": "38766" 00:19:57.092 }, 00:19:57.092 "auth": { 00:19:57.092 "state": "completed", 00:19:57.092 "digest": "sha384", 00:19:57.092 "dhgroup": "ffdhe6144" 00:19:57.092 } 00:19:57.092 } 00:19:57.092 ]' 00:19:57.092 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.092 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.092 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.349 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:57.349 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.349 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.349 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.349 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.607 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==: 00:19:58.537 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.537 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.537 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.537 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.537 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.537 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.537 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:58.537 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:58.795 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:58.795 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.795 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:58.795 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:58.795 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:58.795 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.795 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.795 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.795 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.795 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.795 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.795 04:03:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.386 00:19:59.386 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.386 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.386 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.642 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.642 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.642 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.642 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.642 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.642 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.642 { 00:19:59.642 "cntlid": 85, 00:19:59.642 "qid": 0, 00:19:59.642 "state": "enabled", 00:19:59.642 "thread": "nvmf_tgt_poll_group_000", 00:19:59.642 "listen_address": { 00:19:59.642 "trtype": "TCP", 00:19:59.642 "adrfam": "IPv4", 00:19:59.642 "traddr": "10.0.0.2", 00:19:59.642 "trsvcid": "4420" 00:19:59.642 }, 00:19:59.642 "peer_address": { 00:19:59.642 "trtype": "TCP", 00:19:59.642 "adrfam": "IPv4", 00:19:59.643 "traddr": "10.0.0.1", 
00:19:59.643 "trsvcid": "38792" 00:19:59.643 }, 00:19:59.643 "auth": { 00:19:59.643 "state": "completed", 00:19:59.643 "digest": "sha384", 00:19:59.643 "dhgroup": "ffdhe6144" 00:19:59.643 } 00:19:59.643 } 00:19:59.643 ]' 00:19:59.643 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.643 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.643 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.643 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:59.643 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.643 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.643 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.643 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.899 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:20:00.831 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.831 04:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.831 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.831 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.831 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.831 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.088 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:01.088 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:01.346 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:01.346 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.346 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.346 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:01.346 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:01.346 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.346 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:01.346 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.346 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.346 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.346 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.346 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.911 00:20:01.911 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.911 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.911 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.911 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.911 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.911 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.911 04:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.911 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.911 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.911 { 00:20:01.911 "cntlid": 87, 00:20:01.911 "qid": 0, 00:20:01.911 "state": "enabled", 00:20:01.911 "thread": "nvmf_tgt_poll_group_000", 00:20:01.911 "listen_address": { 00:20:01.911 "trtype": "TCP", 00:20:01.911 "adrfam": "IPv4", 00:20:01.911 "traddr": "10.0.0.2", 00:20:01.911 "trsvcid": "4420" 00:20:01.911 }, 00:20:01.911 "peer_address": { 00:20:01.911 "trtype": "TCP", 00:20:01.911 "adrfam": "IPv4", 00:20:01.911 "traddr": "10.0.0.1", 00:20:01.911 "trsvcid": "38826" 00:20:01.911 }, 00:20:01.911 "auth": { 00:20:01.911 "state": "completed", 00:20:01.911 "digest": "sha384", 00:20:01.911 "dhgroup": "ffdhe6144" 00:20:01.911 } 00:20:01.911 } 00:20:01.911 ]' 00:20:01.911 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.168 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.168 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.168 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.168 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.168 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.168 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.168 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.425 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=: 00:20:03.357 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.357 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.357 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.357 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.357 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.357 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.357 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.357 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:03.357 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:03.614 04:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:03.614 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.614 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.614 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:03.614 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:03.614 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.614 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.614 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.614 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.614 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.614 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.614 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0
00:20:04.546
00:20:04.546 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:04.546 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:04.546 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:04.804 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:04.804 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:04.804 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:04.804 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.804 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:04.804 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:04.804 {
00:20:04.804 "cntlid": 89,
00:20:04.804 "qid": 0,
00:20:04.804 "state": "enabled",
00:20:04.804 "thread": "nvmf_tgt_poll_group_000",
00:20:04.804 "listen_address": {
00:20:04.804 "trtype": "TCP",
00:20:04.804 "adrfam": "IPv4",
00:20:04.804 "traddr": "10.0.0.2",
00:20:04.804 "trsvcid": "4420"
00:20:04.804 },
00:20:04.804 "peer_address": {
00:20:04.804 "trtype": "TCP",
00:20:04.804 "adrfam": "IPv4",
00:20:04.804 "traddr": "10.0.0.1",
00:20:04.804 "trsvcid": "38870"
00:20:04.804 },
00:20:04.804 "auth": {
00:20:04.804 "state": "completed",
00:20:04.804 "digest": "sha384",
00:20:04.804 "dhgroup": "ffdhe8192"
00:20:04.804 }
00:20:04.804 }
00:20:04.804 ]'
00:20:04.804 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:04.804 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:04.804 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:04.804 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:04.804 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:04.804 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:04.804 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:04.804 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:05.062 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=:
00:20:05.994 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:06.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:06.252 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:06.252 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:06.252 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.252 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:06.252 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:06.252 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:06.252 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:06.510 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1
00:20:06.510 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:06.510 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:06.510 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:06.510 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:06.510 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:06.510 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:06.510 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:06.510 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.510 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:06.510 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:06.510 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:07.443
00:20:07.443 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:07.443 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:07.443 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:07.701 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:07.701 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:07.701 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:07.701 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:07.701 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:07.701 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:07.701 {
00:20:07.701 "cntlid": 91,
00:20:07.701 "qid": 0,
00:20:07.701 "state": "enabled",
00:20:07.701 "thread": "nvmf_tgt_poll_group_000",
00:20:07.701 "listen_address": {
00:20:07.701 "trtype": "TCP",
00:20:07.701 "adrfam": "IPv4",
00:20:07.701 "traddr": "10.0.0.2",
00:20:07.701 "trsvcid": "4420"
00:20:07.701 },
00:20:07.701 "peer_address": {
00:20:07.701 "trtype": "TCP",
00:20:07.701 "adrfam": "IPv4",
00:20:07.701 "traddr": "10.0.0.1",
00:20:07.701 "trsvcid": "39440"
00:20:07.701 },
00:20:07.701 "auth": {
00:20:07.701 "state": "completed",
00:20:07.701 "digest": "sha384",
00:20:07.701 "dhgroup": "ffdhe8192"
00:20:07.701 }
00:20:07.701 }
00:20:07.701 ]'
00:20:07.701 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:07.701 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:07.701 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:07.701 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:07.701 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:07.701 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:07.701 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:07.701 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:07.958 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==:
00:20:08.890 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:08.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:08.890 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:08.890 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:08.890 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.890 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:08.890 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:08.890 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:08.890 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:09.148 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2
00:20:09.148 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:09.148 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:09.148 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:09.148 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:09.148 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:09.148 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:09.148 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:09.148 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.148 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:09.148 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:09.148 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:10.081
00:20:10.081 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:10.081 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:10.081 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:10.339 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:10.339 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:10.339 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:10.339 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.339 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:10.339 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:10.339 {
00:20:10.339 "cntlid": 93,
00:20:10.339 "qid": 0,
00:20:10.339 "state": "enabled",
00:20:10.339 "thread": "nvmf_tgt_poll_group_000",
00:20:10.339 "listen_address": {
00:20:10.339 "trtype": "TCP",
00:20:10.339 "adrfam": "IPv4",
00:20:10.339 "traddr": "10.0.0.2",
00:20:10.339 "trsvcid": "4420"
00:20:10.339 },
00:20:10.339 "peer_address": {
00:20:10.339 "trtype": "TCP",
00:20:10.339 "adrfam": "IPv4",
00:20:10.339 "traddr": "10.0.0.1",
00:20:10.339 "trsvcid": "39468"
00:20:10.339 },
00:20:10.339 "auth": {
00:20:10.339 "state": "completed",
00:20:10.339 "digest": "sha384",
00:20:10.339 "dhgroup": "ffdhe8192"
00:20:10.339 }
00:20:10.339 }
00:20:10.339 ]'
00:20:10.339 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:10.339 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:10.339 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:10.339 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:10.339 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:10.339 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:10.339 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:10.339 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:10.597 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH:
00:20:11.531 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:11.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:11.531 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:11.531 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.531 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.789 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.789 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:11.789 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:11.789 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:12.046 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3
00:20:12.046 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:12.047 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:12.047 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:12.047 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:12.047 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:12.047 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:12.047 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:12.047 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.047 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:12.047 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:12.047 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:12.979
00:20:12.979 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:12.979 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:12.979 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:13.237 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:13.237 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:13.237 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.237 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.237 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:13.237 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:13.237 {
00:20:13.237 "cntlid": 95,
00:20:13.237 "qid": 0,
00:20:13.237 "state": "enabled",
00:20:13.237 "thread": "nvmf_tgt_poll_group_000",
00:20:13.237 "listen_address": {
00:20:13.237 "trtype": "TCP",
00:20:13.237 "adrfam": "IPv4",
00:20:13.237 "traddr": "10.0.0.2",
00:20:13.237 "trsvcid": "4420"
00:20:13.237 },
00:20:13.237 "peer_address": {
00:20:13.237 "trtype": "TCP",
00:20:13.237 "adrfam": "IPv4",
00:20:13.237 "traddr": "10.0.0.1",
00:20:13.237 "trsvcid": "39494"
00:20:13.237 },
00:20:13.237 "auth": {
00:20:13.237 "state": "completed",
00:20:13.237 "digest": "sha384",
00:20:13.237 "dhgroup": "ffdhe8192"
00:20:13.237 }
00:20:13.237 }
00:20:13.237 ]'
00:20:13.237 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:13.237 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:13.237 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:13.237 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:13.237 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:13.237 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:13.237 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:13.237 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:13.495 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=:
00:20:14.427 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:14.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:14.427 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:14.427 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.427 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.427 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:14.427 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:20:14.427 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:14.427 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:14.427 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:14.427 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:14.685 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0
00:20:14.685 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:14.685 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:14.685 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:14.685 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:14.685 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:14.685 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:14.685 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.685 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.685 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:14.686 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:14.686 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:15.251
00:20:15.251 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:15.251 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:15.251 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:15.251 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:15.251 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:15.251 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:15.251 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.251 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:15.251 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:15.251 {
00:20:15.251 "cntlid": 97,
00:20:15.251 "qid": 0,
00:20:15.251 "state": "enabled",
00:20:15.251 "thread": "nvmf_tgt_poll_group_000",
00:20:15.251 "listen_address": {
00:20:15.251 "trtype": "TCP",
00:20:15.251 "adrfam": "IPv4",
00:20:15.251 "traddr": "10.0.0.2",
00:20:15.251 "trsvcid": "4420"
00:20:15.251 },
00:20:15.251 "peer_address": {
00:20:15.251 "trtype": "TCP",
00:20:15.251 "adrfam": "IPv4",
00:20:15.251 "traddr": "10.0.0.1",
00:20:15.251 "trsvcid": "59924"
00:20:15.251 },
00:20:15.251 "auth": {
00:20:15.251 "state": "completed",
00:20:15.251 "digest": "sha512",
00:20:15.251 "dhgroup": "null"
00:20:15.251 }
00:20:15.251 }
00:20:15.251 ]'
00:20:15.251 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:15.509 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:15.509 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:15.509 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:20:15.509 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:15.509 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:15.509 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:15.509 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:15.798 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=:
00:20:16.730 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:16.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:16.731 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:16.731 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:16.731 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.731 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:16.731 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:16.731 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:16.731 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:16.988 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1
00:20:16.988 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:16.988 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:16.988 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:16.988 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:16.988 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:16.988 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:16.988 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:16.988 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.988 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:16.988 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:16.988 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:17.246
00:20:17.246 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:17.246 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:17.246 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:17.503 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:17.503 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:17.503 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:17.503 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.503 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:17.503 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:17.503 {
00:20:17.504 "cntlid": 99,
00:20:17.504 "qid": 0,
00:20:17.504 "state": "enabled",
00:20:17.504 "thread": "nvmf_tgt_poll_group_000",
00:20:17.504 "listen_address": {
00:20:17.504 "trtype": "TCP",
00:20:17.504 "adrfam": "IPv4",
00:20:17.504 "traddr": "10.0.0.2",
00:20:17.504 "trsvcid": "4420"
00:20:17.504 },
00:20:17.504 "peer_address": {
00:20:17.504 "trtype": "TCP",
00:20:17.504 "adrfam": "IPv4",
00:20:17.504 "traddr": "10.0.0.1",
00:20:17.504 "trsvcid": "59952"
00:20:17.504 },
00:20:17.504 "auth": {
00:20:17.504 "state": "completed",
00:20:17.504 "digest": "sha512",
00:20:17.504 "dhgroup": "null"
00:20:17.504 }
00:20:17.504 }
00:20:17.504 ]'
00:20:17.504 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:17.504 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:17.504 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:17.504 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:20:17.504 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:17.761 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:17.761 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:17.761 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:18.019 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==:
00:20:18.952 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:18.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:18.952 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:18.952 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.952 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.952 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:18.952 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:18.952 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:18.952 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:19.209 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2
00:20:19.209 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:19.209 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:19.209 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:19.209 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:19.209 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:19.209 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:19.209 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:19.209 04:03:34
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.209 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.209 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.209 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.466 00:20:19.466 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.466 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.466 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.724 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.724 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.724 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.724 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.724 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:19.724 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.724 { 00:20:19.724 "cntlid": 101, 00:20:19.724 "qid": 0, 00:20:19.724 "state": "enabled", 00:20:19.724 "thread": "nvmf_tgt_poll_group_000", 00:20:19.724 "listen_address": { 00:20:19.724 "trtype": "TCP", 00:20:19.724 "adrfam": "IPv4", 00:20:19.724 "traddr": "10.0.0.2", 00:20:19.724 "trsvcid": "4420" 00:20:19.724 }, 00:20:19.724 "peer_address": { 00:20:19.724 "trtype": "TCP", 00:20:19.724 "adrfam": "IPv4", 00:20:19.724 "traddr": "10.0.0.1", 00:20:19.724 "trsvcid": "59984" 00:20:19.724 }, 00:20:19.724 "auth": { 00:20:19.724 "state": "completed", 00:20:19.724 "digest": "sha512", 00:20:19.724 "dhgroup": "null" 00:20:19.724 } 00:20:19.724 } 00:20:19.724 ]' 00:20:19.724 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.724 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.724 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.724 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:19.724 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.724 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.724 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.724 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.982 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:20:20.914 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.914 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.914 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.914 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.914 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.914 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.914 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:20.914 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:21.172 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:21.172 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.172 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:21.172 04:03:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:21.172 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:21.172 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.172 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:21.172 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.172 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.172 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.172 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.172 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.429 00:20:21.687 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.687 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.687 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.687 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.687 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.687 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.687 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.946 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.946 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.946 { 00:20:21.946 "cntlid": 103, 00:20:21.946 "qid": 0, 00:20:21.946 "state": "enabled", 00:20:21.946 "thread": "nvmf_tgt_poll_group_000", 00:20:21.946 "listen_address": { 00:20:21.946 "trtype": "TCP", 00:20:21.946 "adrfam": "IPv4", 00:20:21.946 "traddr": "10.0.0.2", 00:20:21.946 "trsvcid": "4420" 00:20:21.946 }, 00:20:21.946 "peer_address": { 00:20:21.946 "trtype": "TCP", 00:20:21.946 "adrfam": "IPv4", 00:20:21.946 "traddr": "10.0.0.1", 00:20:21.946 "trsvcid": "60010" 00:20:21.946 }, 00:20:21.946 "auth": { 00:20:21.946 "state": "completed", 00:20:21.946 "digest": "sha512", 00:20:21.946 "dhgroup": "null" 00:20:21.946 } 00:20:21.946 } 00:20:21.946 ]' 00:20:21.946 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.946 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.946 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.946 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:21.946 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:20:21.946 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.946 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.946 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.204 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=: 00:20:23.137 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.137 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.137 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.137 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.137 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.137 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.137 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.137 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:23.137 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:23.395 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:23.395 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.395 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:23.395 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:23.395 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:23.395 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.395 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.395 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.395 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.395 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.395 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:20:23.395 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.651 00:20:23.651 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.651 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.651 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.908 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.908 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.908 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.908 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.908 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.908 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.908 { 00:20:23.908 "cntlid": 105, 00:20:23.908 "qid": 0, 00:20:23.908 "state": "enabled", 00:20:23.908 "thread": "nvmf_tgt_poll_group_000", 00:20:23.908 "listen_address": { 00:20:23.908 "trtype": "TCP", 00:20:23.908 "adrfam": "IPv4", 00:20:23.908 "traddr": "10.0.0.2", 00:20:23.908 "trsvcid": "4420" 00:20:23.908 }, 00:20:23.908 "peer_address": { 00:20:23.908 "trtype": "TCP", 00:20:23.908 "adrfam": "IPv4", 
00:20:23.908 "traddr": "10.0.0.1", 00:20:23.908 "trsvcid": "60024" 00:20:23.908 }, 00:20:23.908 "auth": { 00:20:23.908 "state": "completed", 00:20:23.908 "digest": "sha512", 00:20:23.908 "dhgroup": "ffdhe2048" 00:20:23.908 } 00:20:23.908 } 00:20:23.908 ]' 00:20:23.908 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.908 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.908 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.165 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:24.166 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.166 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.166 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.166 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.423 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=: 00:20:25.355 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.355 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.355 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.355 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.355 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.355 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.355 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.355 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:25.355 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:25.612 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:25.612 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.612 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:25.612 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:25.612 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:25.612 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.612 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.612 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.612 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.612 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.612 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.612 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.869 00:20:25.869 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.869 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.869 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.126 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.126 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.126 04:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.126 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.126 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.126 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.126 { 00:20:26.126 "cntlid": 107, 00:20:26.126 "qid": 0, 00:20:26.126 "state": "enabled", 00:20:26.126 "thread": "nvmf_tgt_poll_group_000", 00:20:26.127 "listen_address": { 00:20:26.127 "trtype": "TCP", 00:20:26.127 "adrfam": "IPv4", 00:20:26.127 "traddr": "10.0.0.2", 00:20:26.127 "trsvcid": "4420" 00:20:26.127 }, 00:20:26.127 "peer_address": { 00:20:26.127 "trtype": "TCP", 00:20:26.127 "adrfam": "IPv4", 00:20:26.127 "traddr": "10.0.0.1", 00:20:26.127 "trsvcid": "56610" 00:20:26.127 }, 00:20:26.127 "auth": { 00:20:26.127 "state": "completed", 00:20:26.127 "digest": "sha512", 00:20:26.127 "dhgroup": "ffdhe2048" 00:20:26.127 } 00:20:26.127 } 00:20:26.127 ]' 00:20:26.127 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.127 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.127 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.383 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.383 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.383 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.383 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.383 04:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.640 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==: 00:20:27.570 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.571 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.571 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.571 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.571 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.571 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.571 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:27.571 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:27.828 04:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:27.828 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.828 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:27.828 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:27.828 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:27.828 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.828 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.828 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.828 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.828 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.828 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.828 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:28.085 00:20:28.085 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.085 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.085 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.343 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.343 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.343 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.343 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.343 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.343 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.343 { 00:20:28.343 "cntlid": 109, 00:20:28.343 "qid": 0, 00:20:28.343 "state": "enabled", 00:20:28.343 "thread": "nvmf_tgt_poll_group_000", 00:20:28.343 "listen_address": { 00:20:28.343 "trtype": "TCP", 00:20:28.343 "adrfam": "IPv4", 00:20:28.343 "traddr": "10.0.0.2", 00:20:28.343 "trsvcid": "4420" 00:20:28.343 }, 00:20:28.343 "peer_address": { 00:20:28.343 "trtype": "TCP", 00:20:28.343 "adrfam": "IPv4", 00:20:28.343 "traddr": "10.0.0.1", 00:20:28.343 "trsvcid": "56638" 00:20:28.343 }, 00:20:28.343 "auth": { 00:20:28.343 "state": "completed", 00:20:28.343 "digest": "sha512", 00:20:28.343 "dhgroup": "ffdhe2048" 00:20:28.343 } 00:20:28.343 } 00:20:28.343 ]' 00:20:28.343 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.343 
04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.343 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.343 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:28.343 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.343 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.343 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.343 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.600 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:20:29.532 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.532 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.532 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.532 04:03:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.532 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.532 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.532 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:29.532 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:29.790 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:29.790 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.790 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:29.790 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:29.790 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:29.790 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.790 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:29.790 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.790 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.790 04:03:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.790 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.790 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.355 00:20:30.355 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.355 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.355 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.355 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.355 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.355 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.355 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.613 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.613 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.613 { 00:20:30.613 "cntlid": 111, 00:20:30.613 "qid": 0, 
00:20:30.613 "state": "enabled", 00:20:30.613 "thread": "nvmf_tgt_poll_group_000", 00:20:30.613 "listen_address": { 00:20:30.613 "trtype": "TCP", 00:20:30.613 "adrfam": "IPv4", 00:20:30.613 "traddr": "10.0.0.2", 00:20:30.613 "trsvcid": "4420" 00:20:30.613 }, 00:20:30.613 "peer_address": { 00:20:30.613 "trtype": "TCP", 00:20:30.613 "adrfam": "IPv4", 00:20:30.613 "traddr": "10.0.0.1", 00:20:30.613 "trsvcid": "56660" 00:20:30.613 }, 00:20:30.613 "auth": { 00:20:30.613 "state": "completed", 00:20:30.613 "digest": "sha512", 00:20:30.613 "dhgroup": "ffdhe2048" 00:20:30.613 } 00:20:30.613 } 00:20:30.613 ]' 00:20:30.613 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.613 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.613 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.613 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.613 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.613 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.613 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.613 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.870 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=: 00:20:31.816 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.816 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.816 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.816 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.816 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.816 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.816 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.816 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:31.816 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:32.097 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:32.097 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.097 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:32.097 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:32.097 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:32.097 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.097 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.097 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.097 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.097 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.097 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.097 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.662 00:20:32.662 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.662 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.662 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.662 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.662 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.662 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.662 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.663 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.663 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.663 { 00:20:32.663 "cntlid": 113, 00:20:32.663 "qid": 0, 00:20:32.663 "state": "enabled", 00:20:32.663 "thread": "nvmf_tgt_poll_group_000", 00:20:32.663 "listen_address": { 00:20:32.663 "trtype": "TCP", 00:20:32.663 "adrfam": "IPv4", 00:20:32.663 "traddr": "10.0.0.2", 00:20:32.663 "trsvcid": "4420" 00:20:32.663 }, 00:20:32.663 "peer_address": { 00:20:32.663 "trtype": "TCP", 00:20:32.663 "adrfam": "IPv4", 00:20:32.663 "traddr": "10.0.0.1", 00:20:32.663 "trsvcid": "56674" 00:20:32.663 }, 00:20:32.663 "auth": { 00:20:32.663 "state": "completed", 00:20:32.663 "digest": "sha512", 00:20:32.663 "dhgroup": "ffdhe3072" 00:20:32.663 } 00:20:32.663 } 00:20:32.663 ]' 00:20:32.663 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.920 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.920 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.920 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 
00:20:32.920 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.920 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.920 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.920 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.182 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=: 00:20:34.119 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.119 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.119 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.119 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.119 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.119 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:20:34.119 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:34.119 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:34.376 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:34.376 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.376 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:34.376 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:34.376 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:34.376 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.376 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.376 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.376 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.376 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.376 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
-n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.376 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.633 00:20:34.633 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.633 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.633 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.891 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.891 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.891 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.891 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.891 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.891 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.891 { 00:20:34.891 "cntlid": 115, 00:20:34.891 "qid": 0, 00:20:34.891 "state": "enabled", 00:20:34.891 "thread": "nvmf_tgt_poll_group_000", 00:20:34.891 "listen_address": { 00:20:34.891 "trtype": "TCP", 00:20:34.891 "adrfam": "IPv4", 00:20:34.891 "traddr": "10.0.0.2", 00:20:34.891 "trsvcid": "4420" 00:20:34.891 }, 00:20:34.891 "peer_address": { 
00:20:34.891 "trtype": "TCP", 00:20:34.891 "adrfam": "IPv4", 00:20:34.891 "traddr": "10.0.0.1", 00:20:34.891 "trsvcid": "56396" 00:20:34.891 }, 00:20:34.891 "auth": { 00:20:34.891 "state": "completed", 00:20:34.891 "digest": "sha512", 00:20:34.891 "dhgroup": "ffdhe3072" 00:20:34.891 } 00:20:34.891 } 00:20:34.891 ]' 00:20:34.891 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.891 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.891 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.149 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:35.149 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.149 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.149 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.149 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.407 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==: 00:20:36.340 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:20:36.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.341 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.341 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.341 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.341 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.341 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.341 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.341 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.598 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:36.598 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.598 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:36.598 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:36.598 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:36.598 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.598 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.598 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.598 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.598 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.598 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.598 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.856 00:20:37.114 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.114 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.114 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.114 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.114 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.114 04:03:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.114 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.371 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.371 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.371 { 00:20:37.371 "cntlid": 117, 00:20:37.371 "qid": 0, 00:20:37.371 "state": "enabled", 00:20:37.371 "thread": "nvmf_tgt_poll_group_000", 00:20:37.371 "listen_address": { 00:20:37.371 "trtype": "TCP", 00:20:37.371 "adrfam": "IPv4", 00:20:37.371 "traddr": "10.0.0.2", 00:20:37.371 "trsvcid": "4420" 00:20:37.371 }, 00:20:37.371 "peer_address": { 00:20:37.371 "trtype": "TCP", 00:20:37.371 "adrfam": "IPv4", 00:20:37.371 "traddr": "10.0.0.1", 00:20:37.371 "trsvcid": "56428" 00:20:37.371 }, 00:20:37.371 "auth": { 00:20:37.371 "state": "completed", 00:20:37.371 "digest": "sha512", 00:20:37.371 "dhgroup": "ffdhe3072" 00:20:37.371 } 00:20:37.371 } 00:20:37.371 ]' 00:20:37.371 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.371 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.371 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.371 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.371 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.371 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.371 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.371 04:03:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.629 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:20:38.562 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.562 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.562 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.562 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.562 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.562 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.562 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:38.562 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:38.820 04:03:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3
00:20:38.820 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:38.820 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:38.820 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:20:38.820 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:38.820 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:38.820 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:38.820 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:38.820 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:38.820 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:38.820 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:38.820 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:39.385
00:20:39.385 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:39.385 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:39.385 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:39.385 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:39.385 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:39.385 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:39.385 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:39.385 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:39.385 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:39.385 {
00:20:39.385 "cntlid": 119,
00:20:39.385 "qid": 0,
00:20:39.385 "state": "enabled",
00:20:39.385 "thread": "nvmf_tgt_poll_group_000",
00:20:39.385 "listen_address": {
00:20:39.385 "trtype": "TCP",
00:20:39.385 "adrfam": "IPv4",
00:20:39.385 "traddr": "10.0.0.2",
00:20:39.385 "trsvcid": "4420"
00:20:39.385 },
00:20:39.385 "peer_address": {
00:20:39.385 "trtype": "TCP",
00:20:39.385 "adrfam": "IPv4",
00:20:39.385 "traddr": "10.0.0.1",
00:20:39.386 "trsvcid": "56438"
00:20:39.386 },
00:20:39.386 "auth": {
00:20:39.386 "state": "completed",
00:20:39.386 "digest": "sha512",
00:20:39.386 "dhgroup": "ffdhe3072"
00:20:39.386 }
00:20:39.386 }
00:20:39.386 ]'
00:20:39.386 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:39.644 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:39.644 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:39.644 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:39.644 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:39.644 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:39.644 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:39.644 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:39.901 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=:
00:20:40.834 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:40.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:40.834 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:40.834 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:40.834 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:40.834 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:40.834 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:40.834 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:40.834 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:40.834 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:41.092 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0
00:20:41.092 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:41.092 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:41.092 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:20:41.092 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:41.092 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:41.092 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:41.092 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.092 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:41.092 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.092 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:41.092 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:41.350
00:20:41.608 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:41.608 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:41.608 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:41.865 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:41.865 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:41.865 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.865 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:41.865 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.865 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:41.865 {
00:20:41.865 "cntlid": 121,
00:20:41.865 "qid": 0,
00:20:41.865 "state": "enabled",
00:20:41.865 "thread": "nvmf_tgt_poll_group_000",
00:20:41.865 "listen_address": {
00:20:41.865 "trtype": "TCP",
00:20:41.865 "adrfam": "IPv4",
00:20:41.865 "traddr": "10.0.0.2",
00:20:41.865 "trsvcid": "4420"
00:20:41.865 },
00:20:41.865 "peer_address": {
00:20:41.865 "trtype": "TCP",
00:20:41.865 "adrfam": "IPv4",
00:20:41.865 "traddr": "10.0.0.1",
00:20:41.865 "trsvcid": "56460"
00:20:41.865 },
00:20:41.865 "auth": {
00:20:41.865 "state": "completed",
00:20:41.865 "digest": "sha512",
00:20:41.865 "dhgroup": "ffdhe4096"
00:20:41.865 }
00:20:41.865 }
00:20:41.865 ]'
00:20:41.866 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:41.866 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:41.866 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:41.866 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:41.866 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:41.866 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:41.866 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:41.866 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:42.123 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=:
00:20:43.056 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:43.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:43.056 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:43.056 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:43.056 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:43.056 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:43.056 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:43.056 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:43.056 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:43.313 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1
00:20:43.313 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:43.313 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:43.313 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:20:43.313 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:43.313 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:43.313 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:43.313 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:43.313 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:43.313 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:43.313 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:43.313 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:43.570
00:20:43.828 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:43.829 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:43.829 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:44.087 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:44.087 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:44.087 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:44.087 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:44.087 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:44.087 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:44.087 {
00:20:44.087 "cntlid": 123,
00:20:44.087 "qid": 0,
00:20:44.087 "state": "enabled",
00:20:44.087 "thread": "nvmf_tgt_poll_group_000",
00:20:44.087 "listen_address": {
00:20:44.087 "trtype": "TCP",
00:20:44.087 "adrfam": "IPv4",
00:20:44.087 "traddr": "10.0.0.2",
00:20:44.087 "trsvcid": "4420"
00:20:44.087 },
00:20:44.087 "peer_address": {
00:20:44.087 "trtype": "TCP",
00:20:44.087 "adrfam": "IPv4",
00:20:44.087 "traddr": "10.0.0.1",
00:20:44.087 "trsvcid": "56470"
00:20:44.087 },
00:20:44.087 "auth": {
00:20:44.087 "state": "completed",
00:20:44.087 "digest": "sha512",
00:20:44.087 "dhgroup": "ffdhe4096"
00:20:44.087 }
00:20:44.087 }
00:20:44.087 ]'
00:20:44.087 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:44.087 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:44.087 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:44.087 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:44.087 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:44.087 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:44.087 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:44.087 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:44.344 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==:
00:20:45.276 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:45.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:45.276 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:45.276 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:45.276 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.276 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:45.276 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:45.276 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:45.276 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:45.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2
00:20:45.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:45.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:45.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:20:45.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:45.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:45.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:45.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:45.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:45.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:45.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:46.099
00:20:46.099 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:46.099 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:46.099 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:46.357 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:46.357 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:46.357 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:46.357 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:46.357 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:46.357 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:46.357 {
00:20:46.357 "cntlid": 125,
00:20:46.357 "qid": 0,
00:20:46.357 "state": "enabled",
00:20:46.357 "thread": "nvmf_tgt_poll_group_000",
00:20:46.357 "listen_address": {
00:20:46.357 "trtype": "TCP",
00:20:46.357 "adrfam": "IPv4",
00:20:46.357 "traddr": "10.0.0.2",
00:20:46.357 "trsvcid": "4420"
00:20:46.357 },
00:20:46.357 "peer_address": {
00:20:46.357 "trtype": "TCP",
00:20:46.357 "adrfam": "IPv4",
00:20:46.357 "traddr": "10.0.0.1",
00:20:46.357 "trsvcid": "52122"
00:20:46.357 },
00:20:46.357 "auth": {
00:20:46.357 "state": "completed",
00:20:46.357 "digest": "sha512",
00:20:46.357 "dhgroup": "ffdhe4096"
00:20:46.357 }
00:20:46.357 }
00:20:46.357 ]'
00:20:46.357 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:46.357 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:46.357 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:46.357 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:46.357 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:46.357 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:46.357 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:46.357 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:46.615 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH:
00:20:47.546 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:47.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:47.546 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:47.546 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:47.546 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.546 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:47.546 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:47.546 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:47.546 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:47.803 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3
00:20:47.803 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:47.803 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:47.803 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:20:47.803 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:47.803 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:47.803 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:47.803 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:47.803 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.803 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:47.803 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:47.803 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:48.404
00:20:48.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:48.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:48.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:48.662 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:48.662 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:48.662 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:48.662 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.662 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:48.662 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:48.662 {
00:20:48.662 "cntlid": 127,
00:20:48.662 "qid": 0,
00:20:48.662 "state": "enabled",
00:20:48.662 "thread": "nvmf_tgt_poll_group_000",
00:20:48.662 "listen_address": {
00:20:48.662 "trtype": "TCP",
00:20:48.662 "adrfam": "IPv4",
00:20:48.662 "traddr": "10.0.0.2",
00:20:48.662 "trsvcid": "4420"
00:20:48.662 },
00:20:48.662 "peer_address": {
00:20:48.662 "trtype": "TCP",
00:20:48.662 "adrfam": "IPv4",
00:20:48.662 "traddr": "10.0.0.1",
00:20:48.662 "trsvcid": "52152"
00:20:48.662 },
00:20:48.662 "auth": {
00:20:48.662 "state": "completed",
00:20:48.662 "digest": "sha512",
00:20:48.662 "dhgroup": "ffdhe4096"
00:20:48.662 }
00:20:48.662 }
00:20:48.662 ]'
00:20:48.662 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:48.662 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:48.663 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:48.663 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:48.663 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:48.663 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:48.663 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:48.663 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:48.920 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=:
00:20:49.852 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:49.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:49.852 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:49.852 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:49.852 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:49.852 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:49.852 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:49.852 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:49.852 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:49.852 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:50.110 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0
00:20:50.110 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:50.110 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:50.110 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:20:50.110 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:50.110 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:50.110 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:50.110 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:50.110 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.110 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:50.110 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:50.110 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:50.674
00:20:50.674 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:50.674 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:50.674 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:50.932 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:50.932 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:50.932 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:50.932 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.932 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:50.932 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:50.932 {
00:20:50.932 "cntlid": 129,
00:20:50.932 "qid": 0,
00:20:50.932 "state": "enabled",
00:20:50.932 "thread": "nvmf_tgt_poll_group_000",
00:20:50.932 "listen_address": {
00:20:50.932 "trtype": "TCP",
00:20:50.932 "adrfam": "IPv4",
00:20:50.932 "traddr": "10.0.0.2",
00:20:50.932 "trsvcid": "4420"
00:20:50.932 },
00:20:50.932 "peer_address": {
00:20:50.932 "trtype": "TCP",
00:20:50.932 "adrfam": "IPv4",
00:20:50.932 "traddr": "10.0.0.1",
00:20:50.932 "trsvcid": "52178"
00:20:50.932 },
00:20:50.932 "auth": {
00:20:50.932 "state": "completed",
00:20:50.932 "digest": "sha512",
00:20:50.932 "dhgroup": "ffdhe6144"
00:20:50.932 }
00:20:50.932 }
00:20:50.932 ]'
00:20:50.932 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:50.932 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:50.932 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:51.189 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:51.189 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:51.189 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:51.189 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:51.189 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:51.446 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=:
00:20:52.380 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:52.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:52.380 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:52.380 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:52.381 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.381 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:52.381 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:52.381 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:52.381 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:52.640 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1
00:20:52.640 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:52.640 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:52.640 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:20:52.640 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:52.640 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:52.640 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:52.640 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:52.640 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.640 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.640 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.640 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.205 00:20:53.205 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.205 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.205 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.463 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.463 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.463 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.463 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.463 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.463 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:53.463 { 00:20:53.463 "cntlid": 131, 00:20:53.463 "qid": 0, 00:20:53.463 "state": "enabled", 00:20:53.463 "thread": "nvmf_tgt_poll_group_000", 00:20:53.463 "listen_address": { 00:20:53.463 "trtype": "TCP", 00:20:53.463 "adrfam": "IPv4", 00:20:53.463 "traddr": "10.0.0.2", 00:20:53.463 "trsvcid": "4420" 00:20:53.463 }, 00:20:53.463 "peer_address": { 00:20:53.463 "trtype": "TCP", 00:20:53.463 "adrfam": "IPv4", 00:20:53.463 "traddr": "10.0.0.1", 00:20:53.463 "trsvcid": "52200" 00:20:53.463 }, 00:20:53.463 "auth": { 00:20:53.463 "state": "completed", 00:20:53.463 "digest": "sha512", 00:20:53.463 "dhgroup": "ffdhe6144" 00:20:53.463 } 00:20:53.463 } 00:20:53.463 ]' 00:20:53.463 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.463 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.463 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.720 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:53.720 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.720 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.720 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.721 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.978 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==: 00:20:54.911 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.911 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.911 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.911 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.911 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.911 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.911 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:54.911 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:55.168 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:55.168 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.168 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.168 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:55.168 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:55.168 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.168 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.168 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.168 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.168 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.168 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.168 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.732 00:20:55.732 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.732 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.732 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.990 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.990 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.990 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.990 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.990 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.990 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.990 { 00:20:55.990 "cntlid": 133, 00:20:55.990 "qid": 0, 00:20:55.990 "state": "enabled", 00:20:55.990 "thread": "nvmf_tgt_poll_group_000", 00:20:55.990 "listen_address": { 00:20:55.990 "trtype": "TCP", 00:20:55.990 "adrfam": "IPv4", 00:20:55.990 "traddr": "10.0.0.2", 00:20:55.990 "trsvcid": "4420" 00:20:55.990 }, 00:20:55.990 "peer_address": { 00:20:55.990 "trtype": "TCP", 00:20:55.990 "adrfam": "IPv4", 00:20:55.990 "traddr": "10.0.0.1", 00:20:55.990 "trsvcid": "46900" 00:20:55.990 }, 00:20:55.990 "auth": { 00:20:55.990 "state": "completed", 00:20:55.990 "digest": "sha512", 00:20:55.990 "dhgroup": "ffdhe6144" 00:20:55.990 } 00:20:55.990 } 00:20:55.990 ]' 00:20:55.990 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.990 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.990 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.990 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 
00:20:55.990 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.990 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.990 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.990 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.248 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:20:57.181 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.181 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.181 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.181 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.181 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.181 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.181 04:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:57.181 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:57.439 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:57.439 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.439 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.439 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:57.439 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:57.439 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.439 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:57.439 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.439 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.439 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.439 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:20:57.439 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.004 00:20:58.004 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.004 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.004 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.271 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.271 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.271 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.271 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.271 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.271 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.271 { 00:20:58.271 "cntlid": 135, 00:20:58.271 "qid": 0, 00:20:58.271 "state": "enabled", 00:20:58.271 "thread": "nvmf_tgt_poll_group_000", 00:20:58.271 "listen_address": { 00:20:58.271 "trtype": "TCP", 00:20:58.271 "adrfam": "IPv4", 00:20:58.271 "traddr": "10.0.0.2", 00:20:58.271 "trsvcid": "4420" 00:20:58.271 }, 00:20:58.271 "peer_address": { 00:20:58.271 "trtype": "TCP", 00:20:58.271 "adrfam": "IPv4", 00:20:58.271 "traddr": "10.0.0.1", 
00:20:58.271 "trsvcid": "46922" 00:20:58.271 }, 00:20:58.271 "auth": { 00:20:58.271 "state": "completed", 00:20:58.271 "digest": "sha512", 00:20:58.271 "dhgroup": "ffdhe6144" 00:20:58.271 } 00:20:58.271 } 00:20:58.271 ]' 00:20:58.271 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.271 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.271 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.271 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:58.271 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.528 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.528 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.528 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.786 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=: 00:20:59.717 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.717 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.717 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.717 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.717 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.718 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.718 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.718 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:59.718 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:59.975 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:59.975 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.975 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.975 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:59.975 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:59.975 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.975 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.975 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.975 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.975 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.975 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.975 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.907 00:21:00.907 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.907 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.907 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.165 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.165 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.165 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.165 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.165 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.165 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.165 { 00:21:01.165 "cntlid": 137, 00:21:01.165 "qid": 0, 00:21:01.165 "state": "enabled", 00:21:01.165 "thread": "nvmf_tgt_poll_group_000", 00:21:01.165 "listen_address": { 00:21:01.165 "trtype": "TCP", 00:21:01.165 "adrfam": "IPv4", 00:21:01.165 "traddr": "10.0.0.2", 00:21:01.165 "trsvcid": "4420" 00:21:01.165 }, 00:21:01.165 "peer_address": { 00:21:01.165 "trtype": "TCP", 00:21:01.165 "adrfam": "IPv4", 00:21:01.165 "traddr": "10.0.0.1", 00:21:01.165 "trsvcid": "46964" 00:21:01.165 }, 00:21:01.165 "auth": { 00:21:01.165 "state": "completed", 00:21:01.165 "digest": "sha512", 00:21:01.165 "dhgroup": "ffdhe8192" 00:21:01.165 } 00:21:01.165 } 00:21:01.165 ]' 00:21:01.165 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.165 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.165 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.165 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:01.165 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.165 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.165 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.165 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.422 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=: 00:21:02.355 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.355 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.355 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.355 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.355 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.355 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.355 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:02.355 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:02.613 04:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:02.613 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.613 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.613 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:02.613 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:02.613 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.613 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.613 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.613 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.613 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.613 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.613 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:03.546 00:21:03.546 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.546 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.546 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.804 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.804 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.804 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.804 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.804 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.804 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.804 { 00:21:03.804 "cntlid": 139, 00:21:03.804 "qid": 0, 00:21:03.804 "state": "enabled", 00:21:03.804 "thread": "nvmf_tgt_poll_group_000", 00:21:03.804 "listen_address": { 00:21:03.804 "trtype": "TCP", 00:21:03.804 "adrfam": "IPv4", 00:21:03.804 "traddr": "10.0.0.2", 00:21:03.804 "trsvcid": "4420" 00:21:03.804 }, 00:21:03.804 "peer_address": { 00:21:03.804 "trtype": "TCP", 00:21:03.804 "adrfam": "IPv4", 00:21:03.804 "traddr": "10.0.0.1", 00:21:03.804 "trsvcid": "46986" 00:21:03.804 }, 00:21:03.804 "auth": { 00:21:03.804 "state": "completed", 00:21:03.804 "digest": "sha512", 00:21:03.804 "dhgroup": "ffdhe8192" 00:21:03.804 } 00:21:03.804 } 00:21:03.804 ]' 00:21:03.804 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.804 
04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.804 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.061 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.061 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.061 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.061 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.061 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.319 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2U5OTJmNjhlNWJhNWMwOTE3MTYzODlhMDE4OTFkOWOBZlZO: --dhchap-ctrl-secret DHHC-1:02:Yjg2NTk4ZGI0ZTJiOGVmOWIyZTVkNGI4MGQyNzVmZGY5N2Q3MzdmODIyZmJhZDFmX5chbw==: 00:21:05.279 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.279 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.279 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.279 04:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.279 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.279 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.279 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:05.279 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:05.537 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:05.537 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.537 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:05.537 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:05.537 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:05.537 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.537 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.537 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.537 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.537 04:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.537 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.537 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.469 00:21:06.469 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.469 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.469 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.727 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.727 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.727 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.727 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.727 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.727 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.727 { 
00:21:06.727 "cntlid": 141, 00:21:06.727 "qid": 0, 00:21:06.727 "state": "enabled", 00:21:06.727 "thread": "nvmf_tgt_poll_group_000", 00:21:06.727 "listen_address": { 00:21:06.727 "trtype": "TCP", 00:21:06.727 "adrfam": "IPv4", 00:21:06.727 "traddr": "10.0.0.2", 00:21:06.727 "trsvcid": "4420" 00:21:06.727 }, 00:21:06.727 "peer_address": { 00:21:06.727 "trtype": "TCP", 00:21:06.727 "adrfam": "IPv4", 00:21:06.727 "traddr": "10.0.0.1", 00:21:06.727 "trsvcid": "39646" 00:21:06.727 }, 00:21:06.727 "auth": { 00:21:06.727 "state": "completed", 00:21:06.727 "digest": "sha512", 00:21:06.727 "dhgroup": "ffdhe8192" 00:21:06.727 } 00:21:06.727 } 00:21:06.727 ]' 00:21:06.727 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.727 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.727 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.727 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:06.727 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.727 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.727 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.727 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.985 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmQxNmVlOGU2YTM4YjZkZDI4MzczZDVjOWI0ZDY2ODZlNDQzNTY4NTFkZDJmNGU1DcWXhQ==: --dhchap-ctrl-secret DHHC-1:01:NzcwMDNmMTFlZDExYjMxNGM4ZjhhMjllZTlhZGNjODSorDFH: 00:21:07.917 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.917 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.917 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.917 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.917 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.917 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.917 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:07.917 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:08.174 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:08.174 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.174 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.174 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:21:08.174 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:08.174 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.174 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:08.174 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.174 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.174 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.174 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.174 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.104 00:21:09.104 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.104 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.104 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.361 04:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.361 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.361 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.361 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.361 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.361 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.361 { 00:21:09.361 "cntlid": 143, 00:21:09.361 "qid": 0, 00:21:09.361 "state": "enabled", 00:21:09.361 "thread": "nvmf_tgt_poll_group_000", 00:21:09.361 "listen_address": { 00:21:09.361 "trtype": "TCP", 00:21:09.361 "adrfam": "IPv4", 00:21:09.361 "traddr": "10.0.0.2", 00:21:09.361 "trsvcid": "4420" 00:21:09.361 }, 00:21:09.362 "peer_address": { 00:21:09.362 "trtype": "TCP", 00:21:09.362 "adrfam": "IPv4", 00:21:09.362 "traddr": "10.0.0.1", 00:21:09.362 "trsvcid": "39682" 00:21:09.362 }, 00:21:09.362 "auth": { 00:21:09.362 "state": "completed", 00:21:09.362 "digest": "sha512", 00:21:09.362 "dhgroup": "ffdhe8192" 00:21:09.362 } 00:21:09.362 } 00:21:09.362 ]' 00:21:09.362 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.362 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.362 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.362 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:09.362 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.362 04:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.362 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.362 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.618 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=: 00:21:10.549 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.807 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.807 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.807 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.807 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.807 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:10.808 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:10.808 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:10.808 04:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:10.808 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:10.808 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.065 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:11.065 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.065 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.065 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:11.065 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:11.065 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.065 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.065 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.065 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.065 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:11.065 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.066 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.997 00:21:11.997 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.997 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.997 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.997 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.997 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.997 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.997 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.997 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.997 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.997 { 00:21:11.997 "cntlid": 145, 00:21:11.997 "qid": 0, 00:21:11.997 "state": "enabled", 
00:21:11.997 "thread": "nvmf_tgt_poll_group_000", 00:21:11.997 "listen_address": { 00:21:11.997 "trtype": "TCP", 00:21:11.997 "adrfam": "IPv4", 00:21:11.997 "traddr": "10.0.0.2", 00:21:11.997 "trsvcid": "4420" 00:21:11.997 }, 00:21:11.997 "peer_address": { 00:21:11.997 "trtype": "TCP", 00:21:11.997 "adrfam": "IPv4", 00:21:11.997 "traddr": "10.0.0.1", 00:21:11.997 "trsvcid": "39722" 00:21:11.997 }, 00:21:11.997 "auth": { 00:21:11.997 "state": "completed", 00:21:11.997 "digest": "sha512", 00:21:11.997 "dhgroup": "ffdhe8192" 00:21:11.997 } 00:21:11.997 } 00:21:11.997 ]' 00:21:11.997 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.254 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.254 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.254 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:12.254 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.254 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.254 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.254 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.511 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:MThhZWE5NzEyM2JhOTBmOWNjYTdjOGRjMDFjMTc5Nzg1ZDlhODRmYTk5N2ZiMDc5PzWPjg==: --dhchap-ctrl-secret DHHC-1:03:NzhhMmQ4MWIyMzllNTQ4MGVhNWRlOTI4NWY2YzMzNTQ4MGIzNTA0YzJiMjFjYWFlOTFhN2U2NGQ1MzMwODVmNdM86Hg=: 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:13.442 
04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:13.442 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.443 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:13.443 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:14.375 request: 00:21:14.375 { 00:21:14.375 "name": "nvme0", 00:21:14.375 "trtype": "tcp", 00:21:14.375 "traddr": "10.0.0.2", 00:21:14.375 "adrfam": "ipv4", 00:21:14.375 "trsvcid": "4420", 00:21:14.375 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:14.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:14.375 "prchk_reftag": false, 00:21:14.375 "prchk_guard": false, 00:21:14.375 "hdgst": false, 00:21:14.375 "ddgst": false, 00:21:14.375 "dhchap_key": "key2", 
00:21:14.375 "method": "bdev_nvme_attach_controller", 00:21:14.376 "req_id": 1 00:21:14.376 } 00:21:14.376 Got JSON-RPC error response 00:21:14.376 response: 00:21:14.376 { 00:21:14.376 "code": -5, 00:21:14.376 "message": "Input/output error" 00:21:14.376 } 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.376 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:15.307 request: 00:21:15.307 { 00:21:15.307 "name": "nvme0", 00:21:15.307 
"trtype": "tcp", 00:21:15.307 "traddr": "10.0.0.2", 00:21:15.307 "adrfam": "ipv4", 00:21:15.307 "trsvcid": "4420", 00:21:15.307 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:15.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.307 "prchk_reftag": false, 00:21:15.307 "prchk_guard": false, 00:21:15.307 "hdgst": false, 00:21:15.307 "ddgst": false, 00:21:15.307 "dhchap_key": "key1", 00:21:15.307 "dhchap_ctrlr_key": "ckey2", 00:21:15.307 "method": "bdev_nvme_attach_controller", 00:21:15.307 "req_id": 1 00:21:15.307 } 00:21:15.307 Got JSON-RPC error response 00:21:15.307 response: 00:21:15.307 { 00:21:15.307 "code": -5, 00:21:15.307 "message": "Input/output error" 00:21:15.307 } 00:21:15.307 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:15.307 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:15.307 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:15.307 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:15.308 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.308 04:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.241 request: 00:21:16.241 { 00:21:16.241 "name": "nvme0", 00:21:16.241 "trtype": "tcp", 00:21:16.241 "traddr": "10.0.0.2", 00:21:16.241 "adrfam": "ipv4", 00:21:16.241 "trsvcid": "4420", 00:21:16.241 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:16.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:16.241 "prchk_reftag": false, 00:21:16.241 "prchk_guard": false, 00:21:16.241 "hdgst": false, 00:21:16.241 "ddgst": false, 00:21:16.241 "dhchap_key": "key1", 00:21:16.241 "dhchap_ctrlr_key": "ckey1", 00:21:16.241 "method": "bdev_nvme_attach_controller", 00:21:16.241 "req_id": 1 00:21:16.241 } 00:21:16.241 Got JSON-RPC error response 00:21:16.241 response: 00:21:16.241 { 00:21:16.241 "code": -5, 00:21:16.241 "message": "Input/output error" 00:21:16.241 } 00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 834329 00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 834329 ']' 00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 834329 00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 834329 00:21:16.241 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:16.242 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:16.242 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 834329' 00:21:16.242 killing process with pid 834329 00:21:16.242 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 834329 00:21:16.242 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 834329 00:21:16.500 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:16.500 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.500 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:21:16.500 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.500 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=856705 00:21:16.500 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:16.500 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 856705 00:21:16.500 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 856705 ']' 00:21:16.500 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.500 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:16.500 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:16.500 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:16.500 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.758 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:16.758 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:16.758 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.758 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:16.758 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.758 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.758 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:16.758 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 856705 00:21:16.758 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 856705 ']' 00:21:16.758 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.758 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:16.758 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:16.758 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:16.758 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.016 
04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.016 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.949 00:21:17.949 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.949 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.949 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.206 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.206 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.206 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.206 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.206 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.206 04:04:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.206 { 00:21:18.206 "cntlid": 1, 00:21:18.206 "qid": 0, 00:21:18.206 "state": "enabled", 00:21:18.206 "thread": "nvmf_tgt_poll_group_000", 00:21:18.206 "listen_address": { 00:21:18.206 "trtype": "TCP", 00:21:18.206 "adrfam": "IPv4", 00:21:18.206 "traddr": "10.0.0.2", 00:21:18.206 "trsvcid": "4420" 00:21:18.206 }, 00:21:18.206 "peer_address": { 00:21:18.206 "trtype": "TCP", 00:21:18.206 "adrfam": "IPv4", 00:21:18.206 "traddr": "10.0.0.1", 00:21:18.207 "trsvcid": "34376" 00:21:18.207 }, 00:21:18.207 "auth": { 00:21:18.207 "state": "completed", 00:21:18.207 "digest": "sha512", 00:21:18.207 "dhgroup": "ffdhe8192" 00:21:18.207 } 00:21:18.207 } 00:21:18.207 ]' 00:21:18.207 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.207 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.207 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.207 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.207 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.464 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.464 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.464 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.721 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDZlYjNmODUwNWU0MjY4YzQ1MTFkMjgzNmQ2Yzg5NWJlYzU4ZDI4OTI3ZTRiYTE4NmY3NDhhNjEyZjNkM2Q0NV07O7k=: 00:21:19.654 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.654 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.654 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.654 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.654 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.654 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:19.654 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.654 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.654 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.655 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:19.655 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:19.911 04:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.911 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:19.911 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.911 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:19.911 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:19.911 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:19.911 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:19.911 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.911 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.168 request: 00:21:20.168 { 00:21:20.169 "name": "nvme0", 00:21:20.169 "trtype": "tcp", 00:21:20.169 
"traddr": "10.0.0.2", 00:21:20.169 "adrfam": "ipv4", 00:21:20.169 "trsvcid": "4420", 00:21:20.169 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:20.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:20.169 "prchk_reftag": false, 00:21:20.169 "prchk_guard": false, 00:21:20.169 "hdgst": false, 00:21:20.169 "ddgst": false, 00:21:20.169 "dhchap_key": "key3", 00:21:20.169 "method": "bdev_nvme_attach_controller", 00:21:20.169 "req_id": 1 00:21:20.169 } 00:21:20.169 Got JSON-RPC error response 00:21:20.169 response: 00:21:20.169 { 00:21:20.169 "code": -5, 00:21:20.169 "message": "Input/output error" 00:21:20.169 } 00:21:20.169 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:20.169 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:20.169 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:20.169 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:20.169 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:20.169 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:20.169 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:20.169 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:20.426 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.426 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:20.426 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.426 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:20.426 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.426 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:20.427 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.427 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.427 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.684 request: 00:21:20.684 { 00:21:20.684 "name": "nvme0", 00:21:20.684 "trtype": "tcp", 00:21:20.684 "traddr": "10.0.0.2", 00:21:20.684 "adrfam": "ipv4", 00:21:20.684 "trsvcid": "4420", 00:21:20.684 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:20.684 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:20.684 "prchk_reftag": false, 00:21:20.684 "prchk_guard": false, 00:21:20.684 "hdgst": false, 00:21:20.684 "ddgst": false, 00:21:20.684 "dhchap_key": "key3", 00:21:20.684 "method": "bdev_nvme_attach_controller", 00:21:20.684 "req_id": 1 00:21:20.684 } 00:21:20.684 Got JSON-RPC error response 00:21:20.684 response: 00:21:20.684 { 00:21:20.684 "code": -5, 00:21:20.684 "message": "Input/output error" 00:21:20.684 } 00:21:20.684 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:20.684 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:20.684 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:20.684 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:20.684 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:20.684 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:20.684 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:20.684 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:20.684 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:20.684 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.942 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:20.943 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:21.200 request: 00:21:21.200 { 00:21:21.200 "name": "nvme0", 00:21:21.200 "trtype": "tcp", 00:21:21.200 "traddr": "10.0.0.2", 00:21:21.200 "adrfam": "ipv4", 00:21:21.200 "trsvcid": "4420", 00:21:21.201 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:21.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:21.201 "prchk_reftag": false, 00:21:21.201 "prchk_guard": false, 00:21:21.201 "hdgst": false, 00:21:21.201 "ddgst": false, 00:21:21.201 "dhchap_key": "key0", 00:21:21.201 "dhchap_ctrlr_key": "key1", 00:21:21.201 "method": "bdev_nvme_attach_controller", 00:21:21.201 "req_id": 1 00:21:21.201 } 00:21:21.201 Got JSON-RPC error response 00:21:21.201 response: 00:21:21.201 { 00:21:21.201 "code": -5, 00:21:21.201 "message": "Input/output error" 00:21:21.201 } 00:21:21.201 04:04:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:21.201 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:21.201 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:21.201 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:21.201 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:21.201 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:21.468 00:21:21.468 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:21.468 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:21.468 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.777 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.777 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.777 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:22.035 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:22.035 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:22.035 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 834355 00:21:22.035 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 834355 ']' 00:21:22.035 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 834355 00:21:22.035 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:22.035 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:22.035 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 834355 00:21:22.035 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:22.035 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:22.035 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 834355' 00:21:22.035 killing process with pid 834355 00:21:22.035 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 834355 00:21:22.035 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 834355 00:21:22.293 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:22.293 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:22.293 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:22.293 04:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:22.293 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:22.293 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:22.293 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:22.293 rmmod nvme_tcp 00:21:22.551 rmmod nvme_fabrics 00:21:22.551 rmmod nvme_keyring 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 856705 ']' 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 856705 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 856705 ']' 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 856705 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 856705 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 856705' 00:21:22.551 killing process with pid 856705 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 856705 00:21:22.551 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 856705 00:21:22.809 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:22.809 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:22.809 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:22.809 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:22.809 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:22.809 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.809 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.809 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.709 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:24.709 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.bDP /tmp/spdk.key-sha256.Enj /tmp/spdk.key-sha384.4RY /tmp/spdk.key-sha512.Vfd /tmp/spdk.key-sha512.dkJ /tmp/spdk.key-sha384.JlB /tmp/spdk.key-sha256.UaH '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:24.709 00:21:24.709 real 3m7.983s 00:21:24.709 user 7m17.314s 00:21:24.709 sys 0m24.848s 00:21:24.709 04:04:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:24.709 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.709 ************************************ 00:21:24.709 END TEST nvmf_auth_target 00:21:24.709 ************************************ 00:21:24.709 04:04:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:24.709 04:04:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:24.709 04:04:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:24.709 04:04:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:24.709 04:04:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:24.709 ************************************ 00:21:24.709 START TEST nvmf_bdevio_no_huge 00:21:24.709 ************************************ 00:21:24.709 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:24.968 * Looking for test storage... 
00:21:24.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:24.968 
04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:24.968 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.498 04:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:27.498 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:27.498 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:27.498 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:27.498 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.498 04:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.498 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:27.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:21:27.498 00:21:27.498 --- 10.0.0.2 ping statistics --- 00:21:27.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.498 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:21:27.499 00:21:27.499 --- 10.0.0.1 ping statistics --- 00:21:27.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.499 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=859471 00:21:27.499 04:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 859471 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 859471 ']' 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.499 [2024-07-25 04:04:42.395118] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:21:27.499 [2024-07-25 04:04:42.395197] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:27.499 [2024-07-25 04:04:42.452759] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:27.499 [2024-07-25 04:04:42.474468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:27.499 [2024-07-25 04:04:42.561050] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.499 [2024-07-25 04:04:42.561106] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.499 [2024-07-25 04:04:42.561120] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.499 [2024-07-25 04:04:42.561131] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.499 [2024-07-25 04:04:42.561141] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.499 [2024-07-25 04:04:42.561271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:27.499 [2024-07-25 04:04:42.561365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:27.499 [2024-07-25 04:04:42.561415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:27.499 [2024-07-25 04:04:42.561418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.499 04:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.499 [2024-07-25 04:04:42.685447] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.499 Malloc0 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.499 [2024-07-25 04:04:42.723930] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.499 { 00:21:27.499 "params": { 00:21:27.499 "name": "Nvme$subsystem", 00:21:27.499 "trtype": "$TEST_TRANSPORT", 00:21:27.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.499 "adrfam": "ipv4", 00:21:27.499 "trsvcid": "$NVMF_PORT", 00:21:27.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:21:27.499 "hdgst": ${hdgst:-false}, 00:21:27.499 "ddgst": ${ddgst:-false} 00:21:27.499 }, 00:21:27.499 "method": "bdev_nvme_attach_controller" 00:21:27.499 } 00:21:27.499 EOF 00:21:27.499 )") 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:27.499 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:27.499 "params": { 00:21:27.499 "name": "Nvme1", 00:21:27.499 "trtype": "tcp", 00:21:27.499 "traddr": "10.0.0.2", 00:21:27.499 "adrfam": "ipv4", 00:21:27.499 "trsvcid": "4420", 00:21:27.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:27.499 "hdgst": false, 00:21:27.499 "ddgst": false 00:21:27.499 }, 00:21:27.499 "method": "bdev_nvme_attach_controller" 00:21:27.499 }' 00:21:27.499 [2024-07-25 04:04:42.772481] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:21:27.499 [2024-07-25 04:04:42.772576] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid859505 ] 00:21:27.758 [2024-07-25 04:04:42.812838] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:27.758 [2024-07-25 04:04:42.832523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:27.758 [2024-07-25 04:04:42.920313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.758 [2024-07-25 04:04:42.920363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.758 [2024-07-25 04:04:42.920367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.016 I/O targets: 00:21:28.016 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:28.016 00:21:28.016 00:21:28.016 CUnit - A unit testing framework for C - Version 2.1-3 00:21:28.016 http://cunit.sourceforge.net/ 00:21:28.016 00:21:28.016 00:21:28.016 Suite: bdevio tests on: Nvme1n1 00:21:28.016 Test: blockdev write read block ...passed 00:21:28.016 Test: blockdev write zeroes read block ...passed 00:21:28.016 Test: blockdev write zeroes read no split ...passed 00:21:28.016 Test: blockdev write zeroes read split ...passed 00:21:28.272 Test: blockdev write zeroes read split partial ...passed 00:21:28.272 Test: blockdev reset ...[2024-07-25 04:04:43.330767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:28.272 [2024-07-25 04:04:43.330883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2351330 (9): Bad file descriptor 00:21:28.272 [2024-07-25 04:04:43.388061] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:28.272 passed 00:21:28.272 Test: blockdev write read 8 blocks ...passed 00:21:28.272 Test: blockdev write read size > 128k ...passed 00:21:28.272 Test: blockdev write read invalid size ...passed 00:21:28.272 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:28.272 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:28.272 Test: blockdev write read max offset ...passed 00:21:28.272 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:28.272 Test: blockdev writev readv 8 blocks ...passed 00:21:28.272 Test: blockdev writev readv 30 x 1block ...passed 00:21:28.528 Test: blockdev writev readv block ...passed 00:21:28.528 Test: blockdev writev readv size > 128k ...passed 00:21:28.529 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:28.529 Test: blockdev comparev and writev ...[2024-07-25 04:04:43.604372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.529 [2024-07-25 04:04:43.604413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.529 [2024-07-25 04:04:43.604439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.529 [2024-07-25 04:04:43.604456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.529 [2024-07-25 04:04:43.604862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.529 [2024-07-25 04:04:43.604887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:28.529 [2024-07-25 04:04:43.604910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.529 [2024-07-25 04:04:43.604926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:28.529 [2024-07-25 04:04:43.605315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.529 [2024-07-25 04:04:43.605340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:28.529 [2024-07-25 04:04:43.605362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.529 [2024-07-25 04:04:43.605377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:28.529 [2024-07-25 04:04:43.605725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.529 [2024-07-25 04:04:43.605749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:28.529 [2024-07-25 04:04:43.605771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.529 [2024-07-25 04:04:43.605788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:28.529 passed 00:21:28.529 Test: blockdev nvme passthru rw ...passed 00:21:28.529 Test: blockdev nvme passthru vendor specific ...[2024-07-25 04:04:43.689567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:28.529 [2024-07-25 04:04:43.689596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:28.529 [2024-07-25 04:04:43.689773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:28.529 [2024-07-25 04:04:43.689796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:28.529 [2024-07-25 04:04:43.689964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:28.529 [2024-07-25 04:04:43.689987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:28.529 [2024-07-25 04:04:43.690159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:28.529 [2024-07-25 04:04:43.690182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:28.529 passed 00:21:28.529 Test: blockdev nvme admin passthru ...passed 00:21:28.529 Test: blockdev copy ...passed 00:21:28.529 00:21:28.529 Run Summary: Type Total Ran Passed Failed Inactive 00:21:28.529 suites 1 1 n/a 0 0 00:21:28.529 tests 23 23 23 0 0 00:21:28.529 asserts 152 152 152 0 n/a 00:21:28.529 00:21:28.529 Elapsed time = 1.249 seconds 00:21:28.786 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:28.786 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.786 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:29.044 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.044 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:29.044 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:29.044 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:29.044 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:29.044 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:29.044 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:29.044 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:29.044 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:29.044 rmmod nvme_tcp 00:21:29.044 rmmod nvme_fabrics 00:21:29.044 rmmod nvme_keyring 00:21:29.044 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:29.044 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:29.044 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:29.045 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 859471 ']' 00:21:29.045 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 859471 00:21:29.045 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 859471 ']' 00:21:29.045 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 859471 00:21:29.045 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:29.045 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:29.045 04:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 859471 00:21:29.045 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:29.045 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:29.045 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 859471' 00:21:29.045 killing process with pid 859471 00:21:29.045 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 859471 00:21:29.045 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 859471 00:21:29.303 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:29.303 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:29.303 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:29.303 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:29.303 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:29.303 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.303 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:29.303 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:31.834 00:21:31.834 real 0m6.605s 00:21:31.834 user 0m10.628s 
00:21:31.834 sys 0m2.592s 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:31.834 ************************************ 00:21:31.834 END TEST nvmf_bdevio_no_huge 00:21:31.834 ************************************ 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:31.834 ************************************ 00:21:31.834 START TEST nvmf_tls 00:21:31.834 ************************************ 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:31.834 * Looking for test storage... 
00:21:31.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.834 
04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.834 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:31.835 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:33.749 04:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.749 04:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:33.749 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:33.749 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.749 04:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:33.749 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.749 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:33.750 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:33.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:33.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:21:33.750 00:21:33.750 --- 10.0.0.2 ping statistics --- 00:21:33.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.750 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:21:33.750 00:21:33.750 --- 10.0.0.1 ping statistics --- 00:21:33.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.750 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=861573 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 861573 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 861573 ']' 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:33.750 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.750 [2024-07-25 04:04:48.815522] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:21:33.750 [2024-07-25 04:04:48.815608] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.750 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.750 [2024-07-25 04:04:48.857489] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:33.750 [2024-07-25 04:04:48.888303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.750 [2024-07-25 04:04:48.979641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.750 [2024-07-25 04:04:48.979705] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.750 [2024-07-25 04:04:48.979733] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.750 [2024-07-25 04:04:48.979746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.750 [2024-07-25 04:04:48.979759] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:33.750 [2024-07-25 04:04:48.979787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.750 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:33.750 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:33.750 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:33.750 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:33.750 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.008 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.008 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:34.008 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:34.265 true 00:21:34.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:34.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:34.523 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:34.523 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:34.523 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:34.780 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:34.780 04:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:35.038 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:35.038 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:35.038 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:35.295 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:35.295 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:35.553 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:35.553 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:35.553 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:35.553 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:35.810 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:35.810 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:35.810 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:36.068 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:36.068 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:36.326 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:36.326 
04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:36.326 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:36.584 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:36.584 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:36.842 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:36.842 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.k9KbtF79su 00:21:36.842 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:36.842 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.3hM0slHT6i 00:21:36.842 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:36.842 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:36.842 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.k9KbtF79su 00:21:36.842 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3hM0slHT6i 00:21:36.842 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:37.100 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:37.666 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.k9KbtF79su 00:21:37.666 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.k9KbtF79su 00:21:37.666 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:37.667 [2024-07-25 04:04:52.928376] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.667 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:37.924 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:38.182 [2024-07-25 04:04:53.425716] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:38.182 [2024-07-25 04:04:53.425978] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.182 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:38.440 malloc0 00:21:38.440 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:38.698 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k9KbtF79su 00:21:38.955 
[2024-07-25 04:04:54.142785] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:38.955 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.k9KbtF79su 00:21:38.955 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.172 Initializing NVMe Controllers 00:21:51.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:51.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:51.172 Initialization complete. Launching workers. 00:21:51.172 ======================================================== 00:21:51.172 Latency(us) 00:21:51.172 Device Information : IOPS MiB/s Average min max 00:21:51.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7816.16 30.53 8190.46 1185.18 10723.97 00:21:51.172 ======================================================== 00:21:51.172 Total : 7816.16 30.53 8190.46 1185.18 10723.97 00:21:51.172 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.k9KbtF79su 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.k9KbtF79su' 00:21:51.172 04:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=863467 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 863467 /var/tmp/bdevperf.sock 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 863467 ']' 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.172 [2024-07-25 04:05:04.330163] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:21:51.172 [2024-07-25 04:05:04.330278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863467 ] 00:21:51.172 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.172 [2024-07-25 04:05:04.363141] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:51.172 [2024-07-25 04:05:04.391143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.172 [2024-07-25 04:05:04.476826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k9KbtF79su 00:21:51.172 [2024-07-25 04:05:04.823065] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.172 [2024-07-25 04:05:04.823192] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:51.172 TLSTESTn1 00:21:51.172 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:51.172 Running I/O for 10 seconds... 
00:22:01.132 00:22:01.132 Latency(us) 00:22:01.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.132 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:01.132 Verification LBA range: start 0x0 length 0x2000 00:22:01.132 TLSTESTn1 : 10.04 3393.98 13.26 0.00 0.00 37620.59 5825.42 53205.52 00:22:01.132 =================================================================================================================== 00:22:01.132 Total : 3393.98 13.26 0.00 0.00 37620.59 5825.42 53205.52 00:22:01.132 0 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 863467 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 863467 ']' 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 863467 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 863467 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 863467' 00:22:01.132 killing process with pid 863467 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 863467 00:22:01.132 Received shutdown signal, test time was about 10.000000 seconds 00:22:01.132 
00:22:01.132 Latency(us) 00:22:01.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.132 =================================================================================================================== 00:22:01.132 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:01.132 [2024-07-25 04:05:15.144586] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 863467 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3hM0slHT6i 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3hM0slHT6i 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3hM0slHT6i 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:01.132 04:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3hM0slHT6i' 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=864780 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 864780 /var/tmp/bdevperf.sock 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 864780 ']' 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.132 [2024-07-25 04:05:15.420799] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:22:01.132 [2024-07-25 04:05:15.420889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864780 ] 00:22:01.132 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.132 [2024-07-25 04:05:15.452257] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:01.132 [2024-07-25 04:05:15.480032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.132 [2024-07-25 04:05:15.562959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:01.132 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3hM0slHT6i 00:22:01.132 [2024-07-25 04:05:15.918097] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:01.132 [2024-07-25 04:05:15.918239] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:01.132 [2024-07-25 04:05:15.923790] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:01.132 [2024-07-25 04:05:15.924210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x6658d0 (107): Transport endpoint is not connected 00:22:01.132 [2024-07-25 04:05:15.925185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6658d0 (9): Bad file descriptor 00:22:01.132 [2024-07-25 04:05:15.926183] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.132 [2024-07-25 04:05:15.926203] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:01.132 [2024-07-25 04:05:15.926220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.132 request: 00:22:01.132 { 00:22:01.132 "name": "TLSTEST", 00:22:01.132 "trtype": "tcp", 00:22:01.132 "traddr": "10.0.0.2", 00:22:01.132 "adrfam": "ipv4", 00:22:01.132 "trsvcid": "4420", 00:22:01.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.132 "prchk_reftag": false, 00:22:01.132 "prchk_guard": false, 00:22:01.132 "hdgst": false, 00:22:01.132 "ddgst": false, 00:22:01.132 "psk": "/tmp/tmp.3hM0slHT6i", 00:22:01.132 "method": "bdev_nvme_attach_controller", 00:22:01.132 "req_id": 1 00:22:01.132 } 00:22:01.133 Got JSON-RPC error response 00:22:01.133 response: 00:22:01.133 { 00:22:01.133 "code": -5, 00:22:01.133 "message": "Input/output error" 00:22:01.133 } 00:22:01.133 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 864780 00:22:01.133 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 864780 ']' 00:22:01.133 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 864780 00:22:01.133 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:01.133 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.133 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
ps --no-headers -o comm= 864780 00:22:01.133 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:01.133 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:01.133 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 864780' 00:22:01.133 killing process with pid 864780 00:22:01.133 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 864780 00:22:01.133 Received shutdown signal, test time was about 10.000000 seconds 00:22:01.133 00:22:01.133 Latency(us) 00:22:01.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.133 =================================================================================================================== 00:22:01.133 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:01.133 [2024-07-25 04:05:15.977368] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:01.133 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 864780 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.k9KbtF79su 00:22:01.133 04:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.k9KbtF79su 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.k9KbtF79su 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.k9KbtF79su' 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=864851 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:01.133 
04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 864851 /var/tmp/bdevperf.sock 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 864851 ']' 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.133 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.133 [2024-07-25 04:05:16.237809] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:01.133 [2024-07-25 04:05:16.237891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864851 ] 00:22:01.133 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.133 [2024-07-25 04:05:16.270533] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:01.133 [2024-07-25 04:05:16.297963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.133 [2024-07-25 04:05:16.380986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.391 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.391 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:01.391 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.k9KbtF79su 00:22:01.649 [2024-07-25 04:05:16.714018] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:01.649 [2024-07-25 04:05:16.714135] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:01.649 [2024-07-25 04:05:16.725417] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:01.649 [2024-07-25 04:05:16.725447] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:01.649 [2024-07-25 04:05:16.725507] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:01.650 [2024-07-25 04:05:16.725832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d28d0 (107): Transport endpoint is not connected 00:22:01.650 [2024-07-25 04:05:16.726820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x14d28d0 (9): Bad file descriptor 00:22:01.650 [2024-07-25 04:05:16.727819] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.650 [2024-07-25 04:05:16.727837] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:01.650 [2024-07-25 04:05:16.727865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.650 request: 00:22:01.650 { 00:22:01.650 "name": "TLSTEST", 00:22:01.650 "trtype": "tcp", 00:22:01.650 "traddr": "10.0.0.2", 00:22:01.650 "adrfam": "ipv4", 00:22:01.650 "trsvcid": "4420", 00:22:01.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.650 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:01.650 "prchk_reftag": false, 00:22:01.650 "prchk_guard": false, 00:22:01.650 "hdgst": false, 00:22:01.650 "ddgst": false, 00:22:01.650 "psk": "/tmp/tmp.k9KbtF79su", 00:22:01.650 "method": "bdev_nvme_attach_controller", 00:22:01.650 "req_id": 1 00:22:01.650 } 00:22:01.650 Got JSON-RPC error response 00:22:01.650 response: 00:22:01.650 { 00:22:01.650 "code": -5, 00:22:01.650 "message": "Input/output error" 00:22:01.650 } 00:22:01.650 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 864851 00:22:01.650 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 864851 ']' 00:22:01.650 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 864851 00:22:01.650 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:01.650 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.650 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 864851 00:22:01.650 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:01.650 04:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:01.650 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 864851' 00:22:01.650 killing process with pid 864851 00:22:01.650 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 864851 00:22:01.650 Received shutdown signal, test time was about 10.000000 seconds 00:22:01.650 00:22:01.650 Latency(us) 00:22:01.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.650 =================================================================================================================== 00:22:01.650 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:01.650 [2024-07-25 04:05:16.769356] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:01.650 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 864851 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.k9KbtF79su 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.k9KbtF79su 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.k9KbtF79su 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.k9KbtF79su' 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=864932 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 864932 /var/tmp/bdevperf.sock 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 864932 ']' 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.908 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.908 [2024-07-25 04:05:17.009410] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:01.908 [2024-07-25 04:05:17.009487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864932 ] 00:22:01.908 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.908 [2024-07-25 04:05:17.046926] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:01.908 [2024-07-25 04:05:17.073075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.908 [2024-07-25 04:05:17.156181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.166 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.166 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:02.166 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k9KbtF79su 00:22:02.424 [2024-07-25 04:05:17.489733] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.424 [2024-07-25 04:05:17.489843] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:02.424 [2024-07-25 04:05:17.497460] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:02.424 [2024-07-25 04:05:17.497490] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:02.424 [2024-07-25 04:05:17.497542] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:02.424 [2024-07-25 04:05:17.497635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21558d0 (107): Transport endpoint is not connected 00:22:02.424 [2024-07-25 04:05:17.498567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x21558d0 (9): Bad file descriptor 00:22:02.424 [2024-07-25 04:05:17.499565] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:02.424 [2024-07-25 04:05:17.499586] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:02.424 [2024-07-25 04:05:17.499602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:02.424 request: 00:22:02.424 { 00:22:02.424 "name": "TLSTEST", 00:22:02.424 "trtype": "tcp", 00:22:02.424 "traddr": "10.0.0.2", 00:22:02.424 "adrfam": "ipv4", 00:22:02.424 "trsvcid": "4420", 00:22:02.424 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:02.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.424 "prchk_reftag": false, 00:22:02.424 "prchk_guard": false, 00:22:02.424 "hdgst": false, 00:22:02.424 "ddgst": false, 00:22:02.424 "psk": "/tmp/tmp.k9KbtF79su", 00:22:02.424 "method": "bdev_nvme_attach_controller", 00:22:02.424 "req_id": 1 00:22:02.424 } 00:22:02.424 Got JSON-RPC error response 00:22:02.424 response: 00:22:02.424 { 00:22:02.424 "code": -5, 00:22:02.424 "message": "Input/output error" 00:22:02.424 } 00:22:02.424 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 864932 00:22:02.424 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 864932 ']' 00:22:02.424 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 864932 00:22:02.424 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:02.424 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.424 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 864932 00:22:02.424 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:02.424 04:05:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:02.424 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 864932' 00:22:02.424 killing process with pid 864932 00:22:02.424 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 864932 00:22:02.424 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.424 00:22:02.424 Latency(us) 00:22:02.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.424 =================================================================================================================== 00:22:02.424 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:02.424 [2024-07-25 04:05:17.549516] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:02.424 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 864932 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=865068 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 865068 /var/tmp/bdevperf.sock 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 865068 ']' 00:22:02.683 04:05:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.683 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.683 [2024-07-25 04:05:17.813100] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:02.683 [2024-07-25 04:05:17.813190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865068 ] 00:22:02.683 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.683 [2024-07-25 04:05:17.844378] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:02.683 [2024-07-25 04:05:17.871200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.683 [2024-07-25 04:05:17.952122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.941 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.941 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:02.941 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:03.199 [2024-07-25 04:05:18.290171] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:03.199 [2024-07-25 04:05:18.292093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1737de0 (9): Bad file descriptor 00:22:03.199 [2024-07-25 04:05:18.293088] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.199 [2024-07-25 04:05:18.293108] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:03.199 [2024-07-25 04:05:18.293135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:03.199 request: 00:22:03.199 { 00:22:03.199 "name": "TLSTEST", 00:22:03.199 "trtype": "tcp", 00:22:03.199 "traddr": "10.0.0.2", 00:22:03.199 "adrfam": "ipv4", 00:22:03.199 "trsvcid": "4420", 00:22:03.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.199 "prchk_reftag": false, 00:22:03.199 "prchk_guard": false, 00:22:03.199 "hdgst": false, 00:22:03.199 "ddgst": false, 00:22:03.199 "method": "bdev_nvme_attach_controller", 00:22:03.199 "req_id": 1 00:22:03.199 } 00:22:03.199 Got JSON-RPC error response 00:22:03.199 response: 00:22:03.199 { 00:22:03.199 "code": -5, 00:22:03.199 "message": "Input/output error" 00:22:03.199 } 00:22:03.199 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 865068 00:22:03.199 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 865068 ']' 00:22:03.199 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 865068 00:22:03.199 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:03.199 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.199 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 865068 00:22:03.199 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:03.199 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:03.199 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 865068' 00:22:03.199 killing process with pid 865068 00:22:03.199 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 865068 00:22:03.199 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.199 00:22:03.199 
Latency(us) 00:22:03.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.199 =================================================================================================================== 00:22:03.199 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:03.199 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 865068 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 861573 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 861573 ']' 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 861573 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 861573 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 861573' 00:22:03.457 
killing process with pid 861573 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 861573 00:22:03.457 [2024-07-25 04:05:18.557381] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:03.457 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 861573 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.SZd6Lb4Q9X 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.SZd6Lb4Q9X 00:22:03.716 04:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=865218 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 865218 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 865218 ']' 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.716 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.716 [2024-07-25 04:05:18.869996] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:22:03.716 [2024-07-25 04:05:18.870090] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.716 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.716 [2024-07-25 04:05:18.905414] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:03.716 [2024-07-25 04:05:18.936835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.974 [2024-07-25 04:05:19.032348] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.974 [2024-07-25 04:05:19.032403] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.974 [2024-07-25 04:05:19.032418] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.974 [2024-07-25 04:05:19.032430] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.974 [2024-07-25 04:05:19.032439] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
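The `format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2` call traced at tls.sh@159 above wraps a configured PSK into the NVMe/TCP TLS PSK interchange string (`NVMeTLSkey-1:02:...==:`). A minimal Python sketch of that transformation, assuming (as in SPDK's `format_key` helper in nvmf/common.sh) that a little-endian CRC-32 of the key is appended before Base64 encoding; the function name and signature here are illustrative, not SPDK API:

```python
import base64
import zlib

def format_interchange_psk(key: str, hash_id: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Build an NVMe TLS PSK interchange string:
    <prefix>:<hash id as 2 hex digits>:Base64(PSK bytes || CRC-32 of PSK):
    The little-endian CRC byte order is an assumption based on SPDK's format_key.
    """
    raw = key.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return "{}:{:02x}:{}:".format(prefix, hash_id, b64)

# With the key and digest from the run above, this should reproduce the
# NVMeTLSkey-1:02:MDAx...==: string that tls.sh@159 stores in key_long.
key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2)
```

Decoding the Base64 payload and re-checking the trailing CRC-32 is how a consumer can validate such a key before use.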
00:22:03.974 [2024-07-25 04:05:19.032469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.974 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.974 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:03.974 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.974 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.974 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.974 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.974 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.SZd6Lb4Q9X 00:22:03.974 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.SZd6Lb4Q9X 00:22:03.974 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:04.232 [2024-07-25 04:05:19.408603] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.232 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:04.490 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:04.747 [2024-07-25 04:05:19.893966] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.748 [2024-07-25 04:05:19.894220] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:04.748 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:05.006 malloc0 00:22:05.006 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:05.264 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SZd6Lb4Q9X 00:22:05.522 [2024-07-25 04:05:20.638596] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SZd6Lb4Q9X 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SZd6Lb4Q9X' 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=865435 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:05.522 04:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 865435 /var/tmp/bdevperf.sock 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 865435 ']' 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:05.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:05.522 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.522 [2024-07-25 04:05:20.702827] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:05.522 [2024-07-25 04:05:20.702916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865435 ] 00:22:05.522 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.522 [2024-07-25 04:05:20.740124] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:05.522 [2024-07-25 04:05:20.768768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.781 [2024-07-25 04:05:20.858936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.781 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.781 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:05.781 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SZd6Lb4Q9X 00:22:06.038 [2024-07-25 04:05:21.241020] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:06.038 [2024-07-25 04:05:21.241138] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:06.296 TLSTESTn1 00:22:06.296 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:06.296 Running I/O for 10 seconds... 
00:22:16.263 00:22:16.263 Latency(us) 00:22:16.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.263 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:16.263 Verification LBA range: start 0x0 length 0x2000 00:22:16.263 TLSTESTn1 : 10.03 3118.50 12.18 0.00 0.00 40953.90 5995.33 72623.60 00:22:16.263 =================================================================================================================== 00:22:16.263 Total : 3118.50 12.18 0.00 0.00 40953.90 5995.33 72623.60 00:22:16.263 0 00:22:16.263 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:16.263 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 865435 00:22:16.263 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 865435 ']' 00:22:16.263 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 865435 00:22:16.263 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:16.263 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.263 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 865435 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 865435' 00:22:16.521 killing process with pid 865435 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 865435 00:22:16.521 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.521 
00:22:16.521 Latency(us) 00:22:16.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.521 =================================================================================================================== 00:22:16.521 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:16.521 [2024-07-25 04:05:31.580393] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 865435 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.SZd6Lb4Q9X 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SZd6Lb4Q9X 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SZd6Lb4Q9X 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SZd6Lb4Q9X 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.521 04:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SZd6Lb4Q9X' 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=866697 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 866697 /var/tmp/bdevperf.sock 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 866697 ']' 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.521 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.779 [2024-07-25 04:05:31.858771] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:16.779 [2024-07-25 04:05:31.858865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866697 ] 00:22:16.779 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.779 [2024-07-25 04:05:31.890585] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:16.779 [2024-07-25 04:05:31.918634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.779 [2024-07-25 04:05:32.010587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.038 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.038 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:17.038 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SZd6Lb4Q9X 00:22:17.324 [2024-07-25 04:05:32.341787] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.324 [2024-07-25 04:05:32.341873] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:17.324 [2024-07-25 04:05:32.341892] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: 
Could not load PSK from /tmp/tmp.SZd6Lb4Q9X 00:22:17.324 request: 00:22:17.324 { 00:22:17.324 "name": "TLSTEST", 00:22:17.324 "trtype": "tcp", 00:22:17.324 "traddr": "10.0.0.2", 00:22:17.324 "adrfam": "ipv4", 00:22:17.324 "trsvcid": "4420", 00:22:17.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.324 "prchk_reftag": false, 00:22:17.324 "prchk_guard": false, 00:22:17.324 "hdgst": false, 00:22:17.324 "ddgst": false, 00:22:17.324 "psk": "/tmp/tmp.SZd6Lb4Q9X", 00:22:17.324 "method": "bdev_nvme_attach_controller", 00:22:17.324 "req_id": 1 00:22:17.324 } 00:22:17.324 Got JSON-RPC error response 00:22:17.324 response: 00:22:17.324 { 00:22:17.324 "code": -1, 00:22:17.324 "message": "Operation not permitted" 00:22:17.324 } 00:22:17.324 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 866697 00:22:17.324 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 866697 ']' 00:22:17.324 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 866697 00:22:17.324 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:17.324 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:17.324 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 866697 00:22:17.324 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:17.324 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:17.324 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 866697' 00:22:17.324 killing process with pid 866697 00:22:17.324 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 866697 00:22:17.324 
Received shutdown signal, test time was about 10.000000 seconds 00:22:17.324 00:22:17.324 Latency(us) 00:22:17.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.324 =================================================================================================================== 00:22:17.324 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:17.324 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 866697 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 865218 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 865218 ']' 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 865218 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 865218 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 865218' 00:22:17.602 killing process with pid 865218 00:22:17.602 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 865218 00:22:17.603 [2024-07-25 04:05:32.641308] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:17.603 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 865218 00:22:17.603 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:17.603 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.603 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:17.603 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.860 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:17.860 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=866841 00:22:17.860 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 866841 00:22:17.860 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 866841 ']' 00:22:17.860 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.860 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:17.860 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:17.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.860 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:17.860 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.860 [2024-07-25 04:05:32.952371] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:17.860 [2024-07-25 04:05:32.952462] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.860 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.860 [2024-07-25 04:05:32.990059] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:17.860 [2024-07-25 04:05:33.021895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.860 [2024-07-25 04:05:33.109284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.860 [2024-07-25 04:05:33.109351] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.860 [2024-07-25 04:05:33.109379] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.860 [2024-07-25 04:05:33.109393] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.860 [2024-07-25 04:05:33.109406] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:17.860 [2024-07-25 04:05:33.109444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.118 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:18.118 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:18.118 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:18.118 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:18.118 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.118 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.119 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.SZd6Lb4Q9X 00:22:18.119 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:18.119 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.SZd6Lb4Q9X 00:22:18.119 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:22:18.119 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.119 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:22:18.119 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.119 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.SZd6Lb4Q9X 00:22:18.119 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.SZd6Lb4Q9X 00:22:18.119 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:18.377 [2024-07-25 04:05:33.477704] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.377 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:18.634 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:18.891 [2024-07-25 04:05:33.959005] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:18.891 [2024-07-25 04:05:33.959274] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.891 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:19.147 malloc0 00:22:19.147 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:19.404 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SZd6Lb4Q9X 00:22:19.404 [2024-07-25 04:05:34.692393] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:19.404 [2024-07-25 04:05:34.692433] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:19.404 [2024-07-25 04:05:34.692480] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:19.404 request: 00:22:19.404 { 
00:22:19.404 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.404 "host": "nqn.2016-06.io.spdk:host1", 00:22:19.404 "psk": "/tmp/tmp.SZd6Lb4Q9X", 00:22:19.404 "method": "nvmf_subsystem_add_host", 00:22:19.404 "req_id": 1 00:22:19.404 } 00:22:19.404 Got JSON-RPC error response 00:22:19.404 response: 00:22:19.404 { 00:22:19.404 "code": -32603, 00:22:19.404 "message": "Internal error" 00:22:19.404 } 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 866841 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 866841 ']' 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 866841 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 866841 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 866841' 00:22:19.661 killing process with pid 866841 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 866841 00:22:19.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 866841 00:22:19.919 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.SZd6Lb4Q9X 00:22:19.919 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:19.919 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.919 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:19.919 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.919 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=867131 00:22:19.919 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:19.919 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 867131 00:22:19.919 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 867131 ']' 00:22:19.919 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.919 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.919 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
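The two failing phases above (`chmod 0666` at tls.sh@170 followed by the `Incorrect permissions for PSK file` errors from bdev_nvme_load_psk and tcp_load_psk, then the restore to 0600 at tls.sh@181) hinge on the loader refusing a key file that group or others can access. A minimal sketch of that kind of check; the exact mode mask SPDK applies is an assumption, and `psk_permissions_ok` is a hypothetical helper, not SPDK API:

```python
import os
import stat
import tempfile

def psk_permissions_ok(path: str) -> bool:
    """Mimic the check behind 'Incorrect permissions for PSK file':
    reject a PSK file whose mode grants any group/other access
    (assumed mask 0o077; the real SPDK check may differ in detail)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0

# Quick demonstration with a throwaway file, as in the chmod calls above.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)
ok_0600 = psk_permissions_ok(path)  # owner-only: accepted (tls.sh@162/@181)
os.chmod(path, 0o666)
ok_0666 = psk_permissions_ok(path)  # world-readable: rejected (tls.sh@170)
os.remove(path)
```

This matches the observed behavior: the 0600 key works end to end, while the 0666 key makes both `bdev_nvme_attach_controller` and `nvmf_subsystem_add_host` return JSON-RPC errors.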
00:22:19.920 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.920 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.920 [2024-07-25 04:05:35.048332] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:19.920 [2024-07-25 04:05:35.048433] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.920 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.920 [2024-07-25 04:05:35.085674] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:19.920 [2024-07-25 04:05:35.117286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.920 [2024-07-25 04:05:35.203615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.920 [2024-07-25 04:05:35.203678] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.920 [2024-07-25 04:05:35.203704] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.920 [2024-07-25 04:05:35.203727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.920 [2024-07-25 04:05:35.203740] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:19.920 [2024-07-25 04:05:35.203779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.177 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:20.177 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:20.177 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:20.177 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:20.177 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.177 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.177 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.SZd6Lb4Q9X 00:22:20.177 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.SZd6Lb4Q9X 00:22:20.177 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:20.435 [2024-07-25 04:05:35.583533] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.435 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:20.693 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:20.950 [2024-07-25 04:05:36.068813] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:20.950 [2024-07-25 04:05:36.069080] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:20.950 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:21.207 malloc0 00:22:21.207 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:21.465 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SZd6Lb4Q9X 00:22:21.722 [2024-07-25 04:05:36.826624] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:21.722 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=867415 00:22:21.722 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:21.722 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:21.722 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 867415 /var/tmp/bdevperf.sock 00:22:21.722 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 867415 ']' 00:22:21.722 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.722 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:21.722 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:21.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.722 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:21.722 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.722 [2024-07-25 04:05:36.882593] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:21.722 [2024-07-25 04:05:36.882666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867415 ] 00:22:21.722 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.722 [2024-07-25 04:05:36.914761] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:21.722 [2024-07-25 04:05:36.940435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.979 [2024-07-25 04:05:37.026283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.979 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:21.979 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:21.979 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SZd6Lb4Q9X 00:22:22.237 [2024-07-25 04:05:37.371417] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:22.237 [2024-07-25 04:05:37.371525] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:22.237 TLSTESTn1 00:22:22.237 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:22.803 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:22.803 "subsystems": [ 00:22:22.803 { 00:22:22.803 "subsystem": "keyring", 00:22:22.803 "config": [] 00:22:22.803 }, 00:22:22.803 { 00:22:22.803 "subsystem": "iobuf", 00:22:22.803 "config": [ 00:22:22.803 { 00:22:22.803 "method": "iobuf_set_options", 00:22:22.803 "params": { 00:22:22.803 "small_pool_count": 8192, 00:22:22.803 "large_pool_count": 1024, 00:22:22.803 "small_bufsize": 8192, 00:22:22.803 "large_bufsize": 135168 00:22:22.803 } 00:22:22.803 } 00:22:22.803 ] 00:22:22.803 }, 00:22:22.803 { 00:22:22.803 "subsystem": "sock", 00:22:22.803 "config": [ 00:22:22.803 { 00:22:22.803 "method": 
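The `rpc.py` calls logged above (create transport, create subsystem, add TLS listener, create malloc bdev, add namespace, add host with a PSK) can be summarized as a sequence of JSON-RPC 2.0 requests. The sketch below renders that sequence with parameter values copied from this log's `save_config` output; it only builds and prints the request objects, it does not talk to an SPDK target, and the key path and NQNs are this test run's temporary values, not defaults:

```python
import json

PSK = "/tmp/tmp.SZd6Lb4Q9X"
NQN = "nqn.2016-06.io.spdk:cnode1"

# Ordered (method, params) pairs mirroring the test's setup_nvmf_tgt steps.
rpc_sequence = [
    ("nvmf_create_transport", {"trtype": "TCP", "c2h_success": False}),
    ("nvmf_create_subsystem", {"nqn": NQN,
                               "serial_number": "SPDK00000000000001",
                               "max_namespaces": 10}),
    ("nvmf_subsystem_add_listener", {"nqn": NQN,
                                     "listen_address": {"trtype": "TCP",
                                                        "adrfam": "IPv4",
                                                        "traddr": "10.0.0.2",
                                                        "trsvcid": "4420"},
                                     "secure_channel": True}),  # the -k flag
    ("bdev_malloc_create", {"name": "malloc0",
                            "num_blocks": 8192,   # 32 MiB / 4096-byte blocks
                            "block_size": 4096}),
    ("nvmf_subsystem_add_ns", {"nqn": NQN,
                               "namespace": {"nsid": 1,
                                             "bdev_name": "malloc0"}}),
    ("nvmf_subsystem_add_host", {"nqn": NQN,
                                 "host": "nqn.2016-06.io.spdk:host1",
                                 "psk": PSK}),
]

# Render each step as the JSON-RPC request body rpc.py would send.
for i, (method, params) in enumerate(rpc_sequence, start=1):
    print(json.dumps({"jsonrpc": "2.0", "id": i,
                      "method": method, "params": params}))
```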
"sock_set_default_impl", 00:22:22.803 "params": { 00:22:22.803 "impl_name": "posix" 00:22:22.803 } 00:22:22.803 }, 00:22:22.803 { 00:22:22.803 "method": "sock_impl_set_options", 00:22:22.803 "params": { 00:22:22.803 "impl_name": "ssl", 00:22:22.803 "recv_buf_size": 4096, 00:22:22.803 "send_buf_size": 4096, 00:22:22.803 "enable_recv_pipe": true, 00:22:22.803 "enable_quickack": false, 00:22:22.803 "enable_placement_id": 0, 00:22:22.803 "enable_zerocopy_send_server": true, 00:22:22.803 "enable_zerocopy_send_client": false, 00:22:22.803 "zerocopy_threshold": 0, 00:22:22.803 "tls_version": 0, 00:22:22.803 "enable_ktls": false 00:22:22.803 } 00:22:22.803 }, 00:22:22.803 { 00:22:22.803 "method": "sock_impl_set_options", 00:22:22.803 "params": { 00:22:22.803 "impl_name": "posix", 00:22:22.803 "recv_buf_size": 2097152, 00:22:22.803 "send_buf_size": 2097152, 00:22:22.803 "enable_recv_pipe": true, 00:22:22.803 "enable_quickack": false, 00:22:22.803 "enable_placement_id": 0, 00:22:22.803 "enable_zerocopy_send_server": true, 00:22:22.803 "enable_zerocopy_send_client": false, 00:22:22.803 "zerocopy_threshold": 0, 00:22:22.803 "tls_version": 0, 00:22:22.803 "enable_ktls": false 00:22:22.803 } 00:22:22.803 } 00:22:22.803 ] 00:22:22.803 }, 00:22:22.803 { 00:22:22.803 "subsystem": "vmd", 00:22:22.803 "config": [] 00:22:22.803 }, 00:22:22.803 { 00:22:22.803 "subsystem": "accel", 00:22:22.803 "config": [ 00:22:22.803 { 00:22:22.803 "method": "accel_set_options", 00:22:22.803 "params": { 00:22:22.803 "small_cache_size": 128, 00:22:22.803 "large_cache_size": 16, 00:22:22.803 "task_count": 2048, 00:22:22.803 "sequence_count": 2048, 00:22:22.803 "buf_count": 2048 00:22:22.803 } 00:22:22.803 } 00:22:22.803 ] 00:22:22.803 }, 00:22:22.803 { 00:22:22.803 "subsystem": "bdev", 00:22:22.803 "config": [ 00:22:22.803 { 00:22:22.803 "method": "bdev_set_options", 00:22:22.803 "params": { 00:22:22.803 "bdev_io_pool_size": 65535, 00:22:22.803 "bdev_io_cache_size": 256, 00:22:22.803 
"bdev_auto_examine": true, 00:22:22.803 "iobuf_small_cache_size": 128, 00:22:22.803 "iobuf_large_cache_size": 16 00:22:22.803 } 00:22:22.803 }, 00:22:22.803 { 00:22:22.803 "method": "bdev_raid_set_options", 00:22:22.803 "params": { 00:22:22.803 "process_window_size_kb": 1024, 00:22:22.803 "process_max_bandwidth_mb_sec": 0 00:22:22.803 } 00:22:22.803 }, 00:22:22.803 { 00:22:22.803 "method": "bdev_iscsi_set_options", 00:22:22.803 "params": { 00:22:22.803 "timeout_sec": 30 00:22:22.803 } 00:22:22.803 }, 00:22:22.803 { 00:22:22.803 "method": "bdev_nvme_set_options", 00:22:22.803 "params": { 00:22:22.803 "action_on_timeout": "none", 00:22:22.803 "timeout_us": 0, 00:22:22.803 "timeout_admin_us": 0, 00:22:22.803 "keep_alive_timeout_ms": 10000, 00:22:22.803 "arbitration_burst": 0, 00:22:22.803 "low_priority_weight": 0, 00:22:22.803 "medium_priority_weight": 0, 00:22:22.803 "high_priority_weight": 0, 00:22:22.803 "nvme_adminq_poll_period_us": 10000, 00:22:22.803 "nvme_ioq_poll_period_us": 0, 00:22:22.803 "io_queue_requests": 0, 00:22:22.803 "delay_cmd_submit": true, 00:22:22.803 "transport_retry_count": 4, 00:22:22.803 "bdev_retry_count": 3, 00:22:22.803 "transport_ack_timeout": 0, 00:22:22.803 "ctrlr_loss_timeout_sec": 0, 00:22:22.803 "reconnect_delay_sec": 0, 00:22:22.803 "fast_io_fail_timeout_sec": 0, 00:22:22.803 "disable_auto_failback": false, 00:22:22.803 "generate_uuids": false, 00:22:22.803 "transport_tos": 0, 00:22:22.803 "nvme_error_stat": false, 00:22:22.803 "rdma_srq_size": 0, 00:22:22.803 "io_path_stat": false, 00:22:22.803 "allow_accel_sequence": false, 00:22:22.803 "rdma_max_cq_size": 0, 00:22:22.803 "rdma_cm_event_timeout_ms": 0, 00:22:22.803 "dhchap_digests": [ 00:22:22.803 "sha256", 00:22:22.803 "sha384", 00:22:22.803 "sha512" 00:22:22.803 ], 00:22:22.803 "dhchap_dhgroups": [ 00:22:22.804 "null", 00:22:22.804 "ffdhe2048", 00:22:22.804 "ffdhe3072", 00:22:22.804 "ffdhe4096", 00:22:22.804 "ffdhe6144", 00:22:22.804 "ffdhe8192" 00:22:22.804 ] 00:22:22.804 } 
00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "method": "bdev_nvme_set_hotplug", 00:22:22.804 "params": { 00:22:22.804 "period_us": 100000, 00:22:22.804 "enable": false 00:22:22.804 } 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "method": "bdev_malloc_create", 00:22:22.804 "params": { 00:22:22.804 "name": "malloc0", 00:22:22.804 "num_blocks": 8192, 00:22:22.804 "block_size": 4096, 00:22:22.804 "physical_block_size": 4096, 00:22:22.804 "uuid": "21211c0c-d596-4a3b-b372-74e054b8230a", 00:22:22.804 "optimal_io_boundary": 0, 00:22:22.804 "md_size": 0, 00:22:22.804 "dif_type": 0, 00:22:22.804 "dif_is_head_of_md": false, 00:22:22.804 "dif_pi_format": 0 00:22:22.804 } 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "method": "bdev_wait_for_examine" 00:22:22.804 } 00:22:22.804 ] 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "subsystem": "nbd", 00:22:22.804 "config": [] 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "subsystem": "scheduler", 00:22:22.804 "config": [ 00:22:22.804 { 00:22:22.804 "method": "framework_set_scheduler", 00:22:22.804 "params": { 00:22:22.804 "name": "static" 00:22:22.804 } 00:22:22.804 } 00:22:22.804 ] 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "subsystem": "nvmf", 00:22:22.804 "config": [ 00:22:22.804 { 00:22:22.804 "method": "nvmf_set_config", 00:22:22.804 "params": { 00:22:22.804 "discovery_filter": "match_any", 00:22:22.804 "admin_cmd_passthru": { 00:22:22.804 "identify_ctrlr": false 00:22:22.804 } 00:22:22.804 } 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "method": "nvmf_set_max_subsystems", 00:22:22.804 "params": { 00:22:22.804 "max_subsystems": 1024 00:22:22.804 } 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "method": "nvmf_set_crdt", 00:22:22.804 "params": { 00:22:22.804 "crdt1": 0, 00:22:22.804 "crdt2": 0, 00:22:22.804 "crdt3": 0 00:22:22.804 } 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "method": "nvmf_create_transport", 00:22:22.804 "params": { 00:22:22.804 "trtype": "TCP", 00:22:22.804 "max_queue_depth": 128, 00:22:22.804 "max_io_qpairs_per_ctrlr": 
127, 00:22:22.804 "in_capsule_data_size": 4096, 00:22:22.804 "max_io_size": 131072, 00:22:22.804 "io_unit_size": 131072, 00:22:22.804 "max_aq_depth": 128, 00:22:22.804 "num_shared_buffers": 511, 00:22:22.804 "buf_cache_size": 4294967295, 00:22:22.804 "dif_insert_or_strip": false, 00:22:22.804 "zcopy": false, 00:22:22.804 "c2h_success": false, 00:22:22.804 "sock_priority": 0, 00:22:22.804 "abort_timeout_sec": 1, 00:22:22.804 "ack_timeout": 0, 00:22:22.804 "data_wr_pool_size": 0 00:22:22.804 } 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "method": "nvmf_create_subsystem", 00:22:22.804 "params": { 00:22:22.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.804 "allow_any_host": false, 00:22:22.804 "serial_number": "SPDK00000000000001", 00:22:22.804 "model_number": "SPDK bdev Controller", 00:22:22.804 "max_namespaces": 10, 00:22:22.804 "min_cntlid": 1, 00:22:22.804 "max_cntlid": 65519, 00:22:22.804 "ana_reporting": false 00:22:22.804 } 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "method": "nvmf_subsystem_add_host", 00:22:22.804 "params": { 00:22:22.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.804 "host": "nqn.2016-06.io.spdk:host1", 00:22:22.804 "psk": "/tmp/tmp.SZd6Lb4Q9X" 00:22:22.804 } 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "method": "nvmf_subsystem_add_ns", 00:22:22.804 "params": { 00:22:22.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.804 "namespace": { 00:22:22.804 "nsid": 1, 00:22:22.804 "bdev_name": "malloc0", 00:22:22.804 "nguid": "21211C0CD5964A3BB37274E054B8230A", 00:22:22.804 "uuid": "21211c0c-d596-4a3b-b372-74e054b8230a", 00:22:22.804 "no_auto_visible": false 00:22:22.804 } 00:22:22.804 } 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "method": "nvmf_subsystem_add_listener", 00:22:22.804 "params": { 00:22:22.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.804 "listen_address": { 00:22:22.804 "trtype": "TCP", 00:22:22.804 "adrfam": "IPv4", 00:22:22.804 "traddr": "10.0.0.2", 00:22:22.804 "trsvcid": "4420" 00:22:22.804 }, 00:22:22.804 
"secure_channel": true 00:22:22.804 } 00:22:22.804 } 00:22:22.804 ] 00:22:22.804 } 00:22:22.804 ] 00:22:22.804 }' 00:22:22.804 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:22.804 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:22.804 "subsystems": [ 00:22:22.804 { 00:22:22.804 "subsystem": "keyring", 00:22:22.804 "config": [] 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "subsystem": "iobuf", 00:22:22.804 "config": [ 00:22:22.804 { 00:22:22.804 "method": "iobuf_set_options", 00:22:22.804 "params": { 00:22:22.804 "small_pool_count": 8192, 00:22:22.804 "large_pool_count": 1024, 00:22:22.804 "small_bufsize": 8192, 00:22:22.804 "large_bufsize": 135168 00:22:22.804 } 00:22:22.804 } 00:22:22.804 ] 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "subsystem": "sock", 00:22:22.804 "config": [ 00:22:22.804 { 00:22:22.804 "method": "sock_set_default_impl", 00:22:22.804 "params": { 00:22:22.804 "impl_name": "posix" 00:22:22.804 } 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "method": "sock_impl_set_options", 00:22:22.804 "params": { 00:22:22.804 "impl_name": "ssl", 00:22:22.804 "recv_buf_size": 4096, 00:22:22.804 "send_buf_size": 4096, 00:22:22.804 "enable_recv_pipe": true, 00:22:22.804 "enable_quickack": false, 00:22:22.804 "enable_placement_id": 0, 00:22:22.804 "enable_zerocopy_send_server": true, 00:22:22.804 "enable_zerocopy_send_client": false, 00:22:22.804 "zerocopy_threshold": 0, 00:22:22.804 "tls_version": 0, 00:22:22.804 "enable_ktls": false 00:22:22.804 } 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "method": "sock_impl_set_options", 00:22:22.804 "params": { 00:22:22.804 "impl_name": "posix", 00:22:22.804 "recv_buf_size": 2097152, 00:22:22.804 "send_buf_size": 2097152, 00:22:22.804 "enable_recv_pipe": true, 00:22:22.804 "enable_quickack": false, 00:22:22.804 "enable_placement_id": 0, 00:22:22.804 
"enable_zerocopy_send_server": true, 00:22:22.804 "enable_zerocopy_send_client": false, 00:22:22.804 "zerocopy_threshold": 0, 00:22:22.804 "tls_version": 0, 00:22:22.804 "enable_ktls": false 00:22:22.804 } 00:22:22.804 } 00:22:22.804 ] 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "subsystem": "vmd", 00:22:22.804 "config": [] 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "subsystem": "accel", 00:22:22.804 "config": [ 00:22:22.804 { 00:22:22.804 "method": "accel_set_options", 00:22:22.804 "params": { 00:22:22.804 "small_cache_size": 128, 00:22:22.804 "large_cache_size": 16, 00:22:22.804 "task_count": 2048, 00:22:22.804 "sequence_count": 2048, 00:22:22.804 "buf_count": 2048 00:22:22.804 } 00:22:22.804 } 00:22:22.804 ] 00:22:22.804 }, 00:22:22.804 { 00:22:22.804 "subsystem": "bdev", 00:22:22.804 "config": [ 00:22:22.804 { 00:22:22.804 "method": "bdev_set_options", 00:22:22.805 "params": { 00:22:22.805 "bdev_io_pool_size": 65535, 00:22:22.805 "bdev_io_cache_size": 256, 00:22:22.805 "bdev_auto_examine": true, 00:22:22.805 "iobuf_small_cache_size": 128, 00:22:22.805 "iobuf_large_cache_size": 16 00:22:22.805 } 00:22:22.805 }, 00:22:22.805 { 00:22:22.805 "method": "bdev_raid_set_options", 00:22:22.805 "params": { 00:22:22.805 "process_window_size_kb": 1024, 00:22:22.805 "process_max_bandwidth_mb_sec": 0 00:22:22.805 } 00:22:22.805 }, 00:22:22.805 { 00:22:22.805 "method": "bdev_iscsi_set_options", 00:22:22.805 "params": { 00:22:22.805 "timeout_sec": 30 00:22:22.805 } 00:22:22.805 }, 00:22:22.805 { 00:22:22.805 "method": "bdev_nvme_set_options", 00:22:22.805 "params": { 00:22:22.805 "action_on_timeout": "none", 00:22:22.805 "timeout_us": 0, 00:22:22.805 "timeout_admin_us": 0, 00:22:22.805 "keep_alive_timeout_ms": 10000, 00:22:22.805 "arbitration_burst": 0, 00:22:22.805 "low_priority_weight": 0, 00:22:22.805 "medium_priority_weight": 0, 00:22:22.805 "high_priority_weight": 0, 00:22:22.805 "nvme_adminq_poll_period_us": 10000, 00:22:22.805 "nvme_ioq_poll_period_us": 0, 00:22:22.805 
"io_queue_requests": 512, 00:22:22.805 "delay_cmd_submit": true, 00:22:22.805 "transport_retry_count": 4, 00:22:22.805 "bdev_retry_count": 3, 00:22:22.805 "transport_ack_timeout": 0, 00:22:22.805 "ctrlr_loss_timeout_sec": 0, 00:22:22.805 "reconnect_delay_sec": 0, 00:22:22.805 "fast_io_fail_timeout_sec": 0, 00:22:22.805 "disable_auto_failback": false, 00:22:22.805 "generate_uuids": false, 00:22:22.805 "transport_tos": 0, 00:22:22.805 "nvme_error_stat": false, 00:22:22.805 "rdma_srq_size": 0, 00:22:22.805 "io_path_stat": false, 00:22:22.805 "allow_accel_sequence": false, 00:22:22.805 "rdma_max_cq_size": 0, 00:22:22.805 "rdma_cm_event_timeout_ms": 0, 00:22:22.805 "dhchap_digests": [ 00:22:22.805 "sha256", 00:22:22.805 "sha384", 00:22:22.805 "sha512" 00:22:22.805 ], 00:22:22.805 "dhchap_dhgroups": [ 00:22:22.805 "null", 00:22:22.805 "ffdhe2048", 00:22:22.805 "ffdhe3072", 00:22:22.805 "ffdhe4096", 00:22:22.805 "ffdhe6144", 00:22:22.805 "ffdhe8192" 00:22:22.805 ] 00:22:22.805 } 00:22:22.805 }, 00:22:22.805 { 00:22:22.805 "method": "bdev_nvme_attach_controller", 00:22:22.805 "params": { 00:22:22.805 "name": "TLSTEST", 00:22:22.805 "trtype": "TCP", 00:22:22.805 "adrfam": "IPv4", 00:22:22.805 "traddr": "10.0.0.2", 00:22:22.805 "trsvcid": "4420", 00:22:22.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.805 "prchk_reftag": false, 00:22:22.805 "prchk_guard": false, 00:22:22.805 "ctrlr_loss_timeout_sec": 0, 00:22:22.805 "reconnect_delay_sec": 0, 00:22:22.805 "fast_io_fail_timeout_sec": 0, 00:22:22.805 "psk": "/tmp/tmp.SZd6Lb4Q9X", 00:22:22.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:22.805 "hdgst": false, 00:22:22.805 "ddgst": false 00:22:22.805 } 00:22:22.805 }, 00:22:22.805 { 00:22:22.805 "method": "bdev_nvme_set_hotplug", 00:22:22.805 "params": { 00:22:22.805 "period_us": 100000, 00:22:22.805 "enable": false 00:22:22.805 } 00:22:22.805 }, 00:22:22.805 { 00:22:22.805 "method": "bdev_wait_for_examine" 00:22:22.805 } 00:22:22.805 ] 00:22:22.805 }, 00:22:22.805 { 
00:22:22.805 "subsystem": "nbd", 00:22:22.805 "config": [] 00:22:22.805 } 00:22:22.805 ] 00:22:22.805 }' 00:22:22.805 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 867415 00:22:22.805 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 867415 ']' 00:22:22.805 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 867415 00:22:22.805 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:22.805 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:22.805 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 867415 00:22:23.063 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:23.063 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:23.063 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 867415' 00:22:23.063 killing process with pid 867415 00:22:23.063 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 867415 00:22:23.063 Received shutdown signal, test time was about 10.000000 seconds 00:22:23.063 00:22:23.063 Latency(us) 00:22:23.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.063 =================================================================================================================== 00:22:23.063 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:23.063 [2024-07-25 04:05:38.120449] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:23.064 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 
867415 00:22:23.064 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 867131 00:22:23.064 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 867131 ']' 00:22:23.064 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 867131 00:22:23.064 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:23.064 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:23.064 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 867131 00:22:23.321 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:23.321 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:23.321 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 867131' 00:22:23.321 killing process with pid 867131 00:22:23.321 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 867131 00:22:23.321 [2024-07-25 04:05:38.371802] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:23.321 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 867131 00:22:23.580 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:23.580 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:23.580 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:23.580 "subsystems": [ 00:22:23.580 { 00:22:23.580 "subsystem": "keyring", 00:22:23.580 "config": [] 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "subsystem": "iobuf", 00:22:23.580 
"config": [ 00:22:23.580 { 00:22:23.580 "method": "iobuf_set_options", 00:22:23.580 "params": { 00:22:23.580 "small_pool_count": 8192, 00:22:23.580 "large_pool_count": 1024, 00:22:23.580 "small_bufsize": 8192, 00:22:23.580 "large_bufsize": 135168 00:22:23.580 } 00:22:23.580 } 00:22:23.580 ] 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "subsystem": "sock", 00:22:23.580 "config": [ 00:22:23.580 { 00:22:23.580 "method": "sock_set_default_impl", 00:22:23.580 "params": { 00:22:23.580 "impl_name": "posix" 00:22:23.580 } 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "method": "sock_impl_set_options", 00:22:23.580 "params": { 00:22:23.580 "impl_name": "ssl", 00:22:23.580 "recv_buf_size": 4096, 00:22:23.580 "send_buf_size": 4096, 00:22:23.580 "enable_recv_pipe": true, 00:22:23.580 "enable_quickack": false, 00:22:23.580 "enable_placement_id": 0, 00:22:23.580 "enable_zerocopy_send_server": true, 00:22:23.580 "enable_zerocopy_send_client": false, 00:22:23.580 "zerocopy_threshold": 0, 00:22:23.580 "tls_version": 0, 00:22:23.580 "enable_ktls": false 00:22:23.580 } 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "method": "sock_impl_set_options", 00:22:23.580 "params": { 00:22:23.580 "impl_name": "posix", 00:22:23.580 "recv_buf_size": 2097152, 00:22:23.580 "send_buf_size": 2097152, 00:22:23.580 "enable_recv_pipe": true, 00:22:23.580 "enable_quickack": false, 00:22:23.580 "enable_placement_id": 0, 00:22:23.580 "enable_zerocopy_send_server": true, 00:22:23.580 "enable_zerocopy_send_client": false, 00:22:23.580 "zerocopy_threshold": 0, 00:22:23.580 "tls_version": 0, 00:22:23.580 "enable_ktls": false 00:22:23.580 } 00:22:23.580 } 00:22:23.580 ] 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "subsystem": "vmd", 00:22:23.580 "config": [] 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "subsystem": "accel", 00:22:23.580 "config": [ 00:22:23.580 { 00:22:23.580 "method": "accel_set_options", 00:22:23.580 "params": { 00:22:23.580 "small_cache_size": 128, 00:22:23.580 "large_cache_size": 16, 00:22:23.580 
"task_count": 2048, 00:22:23.580 "sequence_count": 2048, 00:22:23.580 "buf_count": 2048 00:22:23.580 } 00:22:23.580 } 00:22:23.580 ] 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "subsystem": "bdev", 00:22:23.580 "config": [ 00:22:23.580 { 00:22:23.580 "method": "bdev_set_options", 00:22:23.580 "params": { 00:22:23.580 "bdev_io_pool_size": 65535, 00:22:23.580 "bdev_io_cache_size": 256, 00:22:23.580 "bdev_auto_examine": true, 00:22:23.580 "iobuf_small_cache_size": 128, 00:22:23.580 "iobuf_large_cache_size": 16 00:22:23.580 } 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "method": "bdev_raid_set_options", 00:22:23.580 "params": { 00:22:23.580 "process_window_size_kb": 1024, 00:22:23.580 "process_max_bandwidth_mb_sec": 0 00:22:23.580 } 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "method": "bdev_iscsi_set_options", 00:22:23.580 "params": { 00:22:23.580 "timeout_sec": 30 00:22:23.580 } 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "method": "bdev_nvme_set_options", 00:22:23.580 "params": { 00:22:23.580 "action_on_timeout": "none", 00:22:23.580 "timeout_us": 0, 00:22:23.580 "timeout_admin_us": 0, 00:22:23.580 "keep_alive_timeout_ms": 10000, 00:22:23.580 "arbitration_burst": 0, 00:22:23.580 "low_priority_weight": 0, 00:22:23.580 "medium_priority_weight": 0, 00:22:23.580 "high_priority_weight": 0, 00:22:23.580 "nvme_adminq_poll_period_us": 10000, 00:22:23.580 "nvme_ioq_poll_period_us": 0, 00:22:23.580 "io_queue_requests": 0, 00:22:23.580 "delay_cmd_submit": true, 00:22:23.580 "transport_retry_count": 4, 00:22:23.580 "bdev_retry_count": 3, 00:22:23.580 "transport_ack_timeout": 0, 00:22:23.580 "ctrlr_loss_timeout_sec": 0, 00:22:23.580 "reconnect_delay_sec": 0, 00:22:23.580 "fast_io_fail_timeout_sec": 0, 00:22:23.580 "disable_auto_failback": false, 00:22:23.580 "generate_uuids": false, 00:22:23.580 "transport_tos": 0, 00:22:23.580 "nvme_error_stat": false, 00:22:23.580 "rdma_srq_size": 0, 00:22:23.580 "io_path_stat": false, 00:22:23.580 "allow_accel_sequence": false, 00:22:23.580 
"rdma_max_cq_size": 0, 00:22:23.580 "rdma_cm_event_timeout_ms": 0, 00:22:23.580 "dhchap_digests": [ 00:22:23.580 "sha256", 00:22:23.580 "sha384", 00:22:23.580 "sha512" 00:22:23.580 ], 00:22:23.580 "dhchap_dhgroups": [ 00:22:23.580 "null", 00:22:23.580 "ffdhe2048", 00:22:23.580 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:23.580 "ffdhe3072", 00:22:23.580 "ffdhe4096", 00:22:23.580 "ffdhe6144", 00:22:23.580 "ffdhe8192" 00:22:23.580 ] 00:22:23.580 } 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "method": "bdev_nvme_set_hotplug", 00:22:23.580 "params": { 00:22:23.580 "period_us": 100000, 00:22:23.580 "enable": false 00:22:23.580 } 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "method": "bdev_malloc_create", 00:22:23.580 "params": { 00:22:23.580 "name": "malloc0", 00:22:23.580 "num_blocks": 8192, 00:22:23.580 "block_size": 4096, 00:22:23.580 "physical_block_size": 4096, 00:22:23.580 "uuid": "21211c0c-d596-4a3b-b372-74e054b8230a", 00:22:23.580 "optimal_io_boundary": 0, 00:22:23.580 "md_size": 0, 00:22:23.580 "dif_type": 0, 00:22:23.580 "dif_is_head_of_md": false, 00:22:23.580 "dif_pi_format": 0 00:22:23.580 } 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "method": "bdev_wait_for_examine" 00:22:23.580 } 00:22:23.580 ] 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "subsystem": "nbd", 00:22:23.580 "config": [] 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "subsystem": "scheduler", 00:22:23.580 "config": [ 00:22:23.580 { 00:22:23.580 "method": "framework_set_scheduler", 00:22:23.580 "params": { 00:22:23.580 "name": "static" 00:22:23.580 } 00:22:23.580 } 00:22:23.580 ] 00:22:23.580 }, 00:22:23.580 { 00:22:23.580 "subsystem": "nvmf", 00:22:23.580 "config": [ 00:22:23.580 { 00:22:23.581 "method": "nvmf_set_config", 00:22:23.581 "params": { 00:22:23.581 "discovery_filter": "match_any", 00:22:23.581 "admin_cmd_passthru": { 00:22:23.581 "identify_ctrlr": false 00:22:23.581 } 00:22:23.581 } 00:22:23.581 }, 00:22:23.581 { 00:22:23.581 
"method": "nvmf_set_max_subsystems", 00:22:23.581 "params": { 00:22:23.581 "max_subsystems": 1024 00:22:23.581 } 00:22:23.581 }, 00:22:23.581 { 00:22:23.581 "method": "nvmf_set_crdt", 00:22:23.581 "params": { 00:22:23.581 "crdt1": 0, 00:22:23.581 "crdt2": 0, 00:22:23.581 "crdt3": 0 00:22:23.581 } 00:22:23.581 }, 00:22:23.581 { 00:22:23.581 "method": "nvmf_create_transport", 00:22:23.581 "params": { 00:22:23.581 "trtype": "TCP", 00:22:23.581 "max_queue_depth": 128, 00:22:23.581 "max_io_qpairs_per_ctrlr": 127, 00:22:23.581 "in_capsule_data_size": 4096, 00:22:23.581 "max_io_size": 131072, 00:22:23.581 "io_unit_size": 131072, 00:22:23.581 "max_aq_depth": 128, 00:22:23.581 "num_shared_buffers": 511, 00:22:23.581 "buf_cache_size": 4294967295, 00:22:23.581 "dif_insert_or_strip": false, 00:22:23.581 "zcopy": false, 00:22:23.581 "c2h_success": false, 00:22:23.581 "sock_priority": 0, 00:22:23.581 "abort_timeout_sec": 1, 00:22:23.581 "ack_timeout": 0, 00:22:23.581 "data_wr_pool_size": 0 00:22:23.581 } 00:22:23.581 }, 00:22:23.581 { 00:22:23.581 "method": "nvmf_create_subsystem", 00:22:23.581 "params": { 00:22:23.581 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.581 "allow_any_host": false, 00:22:23.581 "serial_number": "SPDK00000000000001", 00:22:23.581 "model_number": "SPDK bdev Controller", 00:22:23.581 "max_namespaces": 10, 00:22:23.581 "min_cntlid": 1, 00:22:23.581 "max_cntlid": 65519, 00:22:23.581 "ana_reporting": false 00:22:23.581 } 00:22:23.581 }, 00:22:23.581 { 00:22:23.581 "method": "nvmf_subsystem_add_host", 00:22:23.581 "params": { 00:22:23.581 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.581 "host": "nqn.2016-06.io.spdk:host1", 00:22:23.581 "psk": "/tmp/tmp.SZd6Lb4Q9X" 00:22:23.581 } 00:22:23.581 }, 00:22:23.581 { 00:22:23.581 "method": "nvmf_subsystem_add_ns", 00:22:23.581 "params": { 00:22:23.581 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.581 "namespace": { 00:22:23.581 "nsid": 1, 00:22:23.581 "bdev_name": "malloc0", 00:22:23.581 "nguid": 
"21211C0CD5964A3BB37274E054B8230A", 00:22:23.581 "uuid": "21211c0c-d596-4a3b-b372-74e054b8230a", 00:22:23.581 "no_auto_visible": false 00:22:23.581 } 00:22:23.581 } 00:22:23.581 }, 00:22:23.581 { 00:22:23.581 "method": "nvmf_subsystem_add_listener", 00:22:23.581 "params": { 00:22:23.581 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.581 "listen_address": { 00:22:23.581 "trtype": "TCP", 00:22:23.581 "adrfam": "IPv4", 00:22:23.581 "traddr": "10.0.0.2", 00:22:23.581 "trsvcid": "4420" 00:22:23.581 }, 00:22:23.581 "secure_channel": true 00:22:23.581 } 00:22:23.581 } 00:22:23.581 ] 00:22:23.581 } 00:22:23.581 ] 00:22:23.581 }' 00:22:23.581 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.581 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=867574 00:22:23.581 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:23.581 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 867574 00:22:23.581 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 867574 ']' 00:22:23.581 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.581 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.581 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:23.581 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.581 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.581 [2024-07-25 04:05:38.681495] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:23.581 [2024-07-25 04:05:38.681606] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.581 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.581 [2024-07-25 04:05:38.718539] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:23.581 [2024-07-25 04:05:38.749455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.581 [2024-07-25 04:05:38.843349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.581 [2024-07-25 04:05:38.843406] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.581 [2024-07-25 04:05:38.843428] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.581 [2024-07-25 04:05:38.843440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.581 [2024-07-25 04:05:38.843451] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:23.581 [2024-07-25 04:05:38.843541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.839 [2024-07-25 04:05:39.070445] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.839 [2024-07-25 04:05:39.099024] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:23.839 [2024-07-25 04:05:39.115093] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.839 [2024-07-25 04:05:39.115352] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.404 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.404 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:24.404 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:24.404 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:24.404 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.404 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.404 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=867726 00:22:24.404 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 867726 /var/tmp/bdevperf.sock 00:22:24.404 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 867726 ']' 00:22:24.404 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.404 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:24.404 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:24.404 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:24.404 "subsystems": [ 00:22:24.404 { 00:22:24.404 "subsystem": "keyring", 00:22:24.404 "config": [] 00:22:24.404 }, 00:22:24.404 { 00:22:24.404 "subsystem": "iobuf", 00:22:24.404 "config": [ 00:22:24.404 { 00:22:24.404 "method": "iobuf_set_options", 00:22:24.404 "params": { 00:22:24.404 "small_pool_count": 8192, 00:22:24.404 "large_pool_count": 1024, 00:22:24.404 "small_bufsize": 8192, 00:22:24.404 "large_bufsize": 135168 00:22:24.404 } 00:22:24.404 } 00:22:24.404 ] 00:22:24.404 }, 00:22:24.404 { 00:22:24.404 "subsystem": "sock", 00:22:24.404 "config": [ 00:22:24.404 { 00:22:24.404 "method": "sock_set_default_impl", 00:22:24.404 "params": { 00:22:24.404 "impl_name": "posix" 00:22:24.404 } 00:22:24.404 }, 00:22:24.404 { 00:22:24.404 "method": "sock_impl_set_options", 00:22:24.404 "params": { 00:22:24.404 "impl_name": "ssl", 00:22:24.404 "recv_buf_size": 4096, 00:22:24.404 "send_buf_size": 4096, 00:22:24.404 "enable_recv_pipe": true, 00:22:24.404 "enable_quickack": false, 00:22:24.404 "enable_placement_id": 0, 00:22:24.404 "enable_zerocopy_send_server": true, 00:22:24.404 "enable_zerocopy_send_client": false, 00:22:24.404 "zerocopy_threshold": 0, 00:22:24.404 "tls_version": 0, 00:22:24.404 "enable_ktls": false 00:22:24.404 } 00:22:24.404 }, 00:22:24.404 { 00:22:24.404 "method": "sock_impl_set_options", 00:22:24.404 "params": { 00:22:24.404 "impl_name": "posix", 00:22:24.404 "recv_buf_size": 2097152, 00:22:24.404 "send_buf_size": 2097152, 00:22:24.404 "enable_recv_pipe": true, 00:22:24.404 "enable_quickack": false, 00:22:24.404 "enable_placement_id": 0, 00:22:24.404 "enable_zerocopy_send_server": true, 00:22:24.404 "enable_zerocopy_send_client": false, 00:22:24.404 "zerocopy_threshold": 
0, 00:22:24.404 "tls_version": 0, 00:22:24.405 "enable_ktls": false 00:22:24.405 } 00:22:24.405 } 00:22:24.405 ] 00:22:24.405 }, 00:22:24.405 { 00:22:24.405 "subsystem": "vmd", 00:22:24.405 "config": [] 00:22:24.405 }, 00:22:24.405 { 00:22:24.405 "subsystem": "accel", 00:22:24.405 "config": [ 00:22:24.405 { 00:22:24.405 "method": "accel_set_options", 00:22:24.405 "params": { 00:22:24.405 "small_cache_size": 128, 00:22:24.405 "large_cache_size": 16, 00:22:24.405 "task_count": 2048, 00:22:24.405 "sequence_count": 2048, 00:22:24.405 "buf_count": 2048 00:22:24.405 } 00:22:24.405 } 00:22:24.405 ] 00:22:24.405 }, 00:22:24.405 { 00:22:24.405 "subsystem": "bdev", 00:22:24.405 "config": [ 00:22:24.405 { 00:22:24.405 "method": "bdev_set_options", 00:22:24.405 "params": { 00:22:24.405 "bdev_io_pool_size": 65535, 00:22:24.405 "bdev_io_cache_size": 256, 00:22:24.405 "bdev_auto_examine": true, 00:22:24.405 "iobuf_small_cache_size": 128, 00:22:24.405 "iobuf_large_cache_size": 16 00:22:24.405 } 00:22:24.405 }, 00:22:24.405 { 00:22:24.405 "method": "bdev_raid_set_options", 00:22:24.405 "params": { 00:22:24.405 "process_window_size_kb": 1024, 00:22:24.405 "process_max_bandwidth_mb_sec": 0 00:22:24.405 } 00:22:24.405 }, 00:22:24.405 { 00:22:24.405 "method": "bdev_iscsi_set_options", 00:22:24.405 "params": { 00:22:24.405 "timeout_sec": 30 00:22:24.405 } 00:22:24.405 }, 00:22:24.405 { 00:22:24.405 "method": "bdev_nvme_set_options", 00:22:24.405 "params": { 00:22:24.405 "action_on_timeout": "none", 00:22:24.405 "timeout_us": 0, 00:22:24.405 "timeout_admin_us": 0, 00:22:24.405 "keep_alive_timeout_ms": 10000, 00:22:24.405 "arbitration_burst": 0, 00:22:24.405 "low_priority_weight": 0, 00:22:24.405 "medium_priority_weight": 0, 00:22:24.405 "high_priority_weight": 0, 00:22:24.405 "nvme_adminq_poll_period_us": 10000, 00:22:24.405 "nvme_ioq_poll_period_us": 0, 00:22:24.405 "io_queue_requests": 512, 00:22:24.405 "delay_cmd_submit": true, 00:22:24.405 "transport_retry_count": 4, 00:22:24.405 
"bdev_retry_count": 3, 00:22:24.405 "transport_ack_timeout": 0, 00:22:24.405 "ctrlr_loss_timeout_sec": 0, 00:22:24.405 "reconnect_delay_sec": 0, 00:22:24.405 "fast_io_fail_timeout_sec": 0, 00:22:24.405 "disable_auto_failback": false, 00:22:24.405 "generate_uuids": false, 00:22:24.405 "transport_tos": 0, 00:22:24.405 "nvme_error_stat": false, 00:22:24.405 "rdma_srq_size": 0, 00:22:24.405 "io_path_stat": false, 00:22:24.405 "allow_accel_sequence": false, 00:22:24.405 "rdma_max_cq_size": 0, 00:22:24.405 "rdma_cm_event_timeout_ms": 0, 00:22:24.405 "dhchap_digests": [ 00:22:24.405 "sha256", 00:22:24.405 "sha384", 00:22:24.405 "sha512" 00:22:24.405 ], 00:22:24.405 "dhchap_dhgroups": [ 00:22:24.405 "null", 00:22:24.405 "ffdhe2048", 00:22:24.405 "ffdhe3072", 00:22:24.405 "ffdhe4096", 00:22:24.405 "ffdhe6144", 00:22:24.405 "ffdhe8192" 00:22:24.405 ] 00:22:24.405 } 00:22:24.405 }, 00:22:24.405 { 00:22:24.405 "method": "bdev_nvme_attach_controller", 00:22:24.405 "params": { 00:22:24.405 "name": "TLSTEST", 00:22:24.405 "trtype": "TCP", 00:22:24.405 "adrfam": "IPv4", 00:22:24.405 "traddr": "10.0.0.2", 00:22:24.405 "trsvcid": "4420", 00:22:24.405 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.405 "prchk_reftag": false, 00:22:24.405 "prchk_guard": false, 00:22:24.405 "ctrlr_loss_timeout_sec": 0, 00:22:24.405 "reconnect_delay_sec": 0, 00:22:24.405 "fast_io_fail_timeout_sec": 0, 00:22:24.405 "psk": "/tmp/tmp.SZd6Lb4Q9X", 00:22:24.405 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.405 "hdgst": false, 00:22:24.405 "ddgst": false 00:22:24.405 } 00:22:24.405 }, 00:22:24.405 { 00:22:24.405 "method": "bdev_nvme_set_hotplug", 00:22:24.405 "params": { 00:22:24.405 "period_us": 100000, 00:22:24.405 "enable": false 00:22:24.405 } 00:22:24.405 }, 00:22:24.405 { 00:22:24.405 "method": "bdev_wait_for_examine" 00:22:24.405 } 00:22:24.405 ] 00:22:24.405 }, 00:22:24.405 { 00:22:24.405 "subsystem": "nbd", 00:22:24.405 "config": [] 00:22:24.405 } 00:22:24.405 ] 00:22:24.405 }' 00:22:24.405 
04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:24.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:24.405 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:24.405 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.663 [2024-07-25 04:05:39.707637] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:24.663 [2024-07-25 04:05:39.707735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867726 ] 00:22:24.663 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.663 [2024-07-25 04:05:39.739416] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:24.664 [2024-07-25 04:05:39.766283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.664 [2024-07-25 04:05:39.849534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.921 [2024-07-25 04:05:40.015988] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.921 [2024-07-25 04:05:40.016121] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:25.486 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:25.487 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:25.487 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:25.487 Running I/O for 10 seconds... 
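The bdevperf summary that follows reports both IOPS and MiB/s for the verify workload launched above with 4096-byte I/Os (-o 4096) at queue depth 128. The MiB/s column is simply IOPS times the I/O size; a quick sanity check of that conversion, using the TLSTESTn1 figures from this run:

```python
# Sanity-check bdevperf's MiB/s column: throughput = IOPS * io_size.
# Figures taken from this run's TLSTESTn1 result table; io_size comes
# from the bdevperf invocation above (-o 4096).
iops = 3358.18
io_size = 4096  # bytes per I/O
mib_s = iops * io_size / (1024 * 1024)
print(f"{mib_s:.2f} MiB/s")  # prints 13.12, matching the reported MiB/s
```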
00:22:37.678 00:22:37.678 Latency(us) 00:22:37.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.678 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:37.678 Verification LBA range: start 0x0 length 0x2000 00:22:37.678 TLSTESTn1 : 10.04 3358.18 13.12 0.00 0.00 38018.17 6165.24 55147.33 00:22:37.678 =================================================================================================================== 00:22:37.678 Total : 3358.18 13.12 0.00 0.00 38018.17 6165.24 55147.33 00:22:37.678 0 00:22:37.678 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:37.678 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 867726 00:22:37.678 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 867726 ']' 00:22:37.678 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 867726 00:22:37.678 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:37.678 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:37.678 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 867726 00:22:37.678 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:37.678 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:37.678 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 867726' 00:22:37.678 killing process with pid 867726 00:22:37.678 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 867726 00:22:37.678 Received shutdown signal, test time was about 10.000000 seconds 00:22:37.678 
00:22:37.678 Latency(us) 00:22:37.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.678 =================================================================================================================== 00:22:37.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:37.678 [2024-07-25 04:05:50.896385] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:37.678 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 867726 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 867574 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 867574 ']' 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 867574 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 867574 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 867574' 00:22:37.678 killing process with pid 867574 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 867574 00:22:37.678 [2024-07-25 04:05:51.151580] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:37.678 04:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 867574 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=869140 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 869140 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 869140 ']' 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:37.678 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.679 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:37.679 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.679 [2024-07-25 04:05:51.452215] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:22:37.679 [2024-07-25 04:05:51.452340] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.679 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.679 [2024-07-25 04:05:51.491908] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:37.679 [2024-07-25 04:05:51.522863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.679 [2024-07-25 04:05:51.620179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.679 [2024-07-25 04:05:51.620239] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.679 [2024-07-25 04:05:51.620277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.679 [2024-07-25 04:05:51.620293] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.679 [2024-07-25 04:05:51.620305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:37.679 [2024-07-25 04:05:51.620345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.679 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:37.679 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:37.679 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:37.679 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:37.679 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.679 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.679 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.SZd6Lb4Q9X 00:22:37.679 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.SZd6Lb4Q9X 00:22:37.679 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:37.679 [2024-07-25 04:05:51.987530] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.679 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:37.679 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:37.679 [2024-07-25 04:05:52.569155] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:37.679 [2024-07-25 04:05:52.569440] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:37.679 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:37.679 malloc0 00:22:37.679 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:37.935 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SZd6Lb4Q9X 00:22:38.192 [2024-07-25 04:05:53.359544] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:38.192 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=869348 00:22:38.192 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:38.192 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:38.192 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 869348 /var/tmp/bdevperf.sock 00:22:38.192 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 869348 ']' 00:22:38.192 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.192 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.192 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:38.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.192 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.192 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.192 [2024-07-25 04:05:53.424887] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:38.192 [2024-07-25 04:05:53.424977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869348 ] 00:22:38.192 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.192 [2024-07-25 04:05:53.458053] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:38.192 [2024-07-25 04:05:53.488596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.449 [2024-07-25 04:05:53.579320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.449 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:38.449 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:38.449 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SZd6Lb4Q9X 00:22:38.706 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:38.963 [2024-07-25 04:05:54.176239] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.221 nvme0n1 00:22:39.221 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:39.221 Running I/O for 1 seconds... 
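The result tables in this log can also be cross-checked with Little's Law: with bdevperf driving a fixed queue depth of 128 (-q 128), IOPS times average latency should land near 128 outstanding I/Os, assuming the queue stayed saturated for the run. A sketch of that check against both runs' figures:

```python
# Little's Law cross-check: in-flight I/Os = IOPS * avg_latency.
# bdevperf ran with -q 128, so both runs should sit near 128.
# IOPS and average latency (converted from us to s) are taken from
# this log's two result tables.
runs = {
    "TLSTESTn1 (10 s run)": (3358.18, 38018.17e-6),
    "nvme0n1 (1 s run)":    (3047.17, 41568.44e-6),
}
for name, (iops, lat_s) in runs.items():
    inflight = iops * lat_s
    print(f"{name}: ~{inflight:.0f} in-flight I/Os")  # close to the -q 128 depth
```

Both runs come out within a couple of I/Os of the configured depth, which is consistent with the workload being queue-depth-bound rather than target-limited.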
00:22:40.150 00:22:40.151 Latency(us) 00:22:40.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.151 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:40.151 Verification LBA range: start 0x0 length 0x2000 00:22:40.151 nvme0n1 : 1.02 3047.17 11.90 0.00 0.00 41568.44 6941.96 75342.13 00:22:40.151 =================================================================================================================== 00:22:40.151 Total : 3047.17 11.90 0.00 0.00 41568.44 6941.96 75342.13 00:22:40.151 0 00:22:40.151 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 869348 00:22:40.151 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 869348 ']' 00:22:40.151 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 869348 00:22:40.151 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:40.151 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:40.151 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 869348 00:22:40.409 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:40.409 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:40.409 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 869348' 00:22:40.409 killing process with pid 869348 00:22:40.409 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 869348 00:22:40.409 Received shutdown signal, test time was about 1.000000 seconds 00:22:40.409 00:22:40.409 Latency(us) 00:22:40.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.409 
=================================================================================================================== 00:22:40.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.409 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 869348 00:22:40.409 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 869140 00:22:40.409 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 869140 ']' 00:22:40.409 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 869140 00:22:40.409 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:40.409 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:40.666 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 869140 00:22:40.666 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:40.666 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:40.666 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 869140' 00:22:40.666 killing process with pid 869140 00:22:40.666 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 869140 00:22:40.666 [2024-07-25 04:05:55.731842] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:40.666 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 869140 00:22:40.923 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:22:40.923 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:40.923 04:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:40.923 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.923 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=869725 00:22:40.924 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:40.924 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 869725 00:22:40.924 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 869725 ']' 00:22:40.924 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.924 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.924 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.924 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.924 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.924 [2024-07-25 04:05:56.019357] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:22:40.924 [2024-07-25 04:05:56.019451] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.924 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.924 [2024-07-25 04:05:56.056934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:40.924 [2024-07-25 04:05:56.085201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.924 [2024-07-25 04:05:56.176334] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.924 [2024-07-25 04:05:56.176392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.924 [2024-07-25 04:05:56.176417] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.924 [2024-07-25 04:05:56.176431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.924 [2024-07-25 04:05:56.176452] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:40.924 [2024-07-25 04:05:56.176481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.181 [2024-07-25 04:05:56.323292] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.181 malloc0 00:22:41.181 [2024-07-25 04:05:56.356214] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:41.181 [2024-07-25 04:05:56.368483] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=869773 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@256 -- # waitforlisten 869773 /var/tmp/bdevperf.sock 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 869773 ']' 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:41.181 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.181 [2024-07-25 04:05:56.435911] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:41.182 [2024-07-25 04:05:56.435972] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869773 ] 00:22:41.182 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.182 [2024-07-25 04:05:56.468068] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:41.467 [2024-07-25 04:05:56.500346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.467 [2024-07-25 04:05:56.591358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.467 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:41.467 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:41.467 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SZd6Lb4Q9X 00:22:41.724 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:41.980 [2024-07-25 04:05:57.248439] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.238 nvme0n1 00:22:42.238 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:42.238 Running I/O for 1 seconds... 
00:22:43.608 00:22:43.608 Latency(us) 00:22:43.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.608 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:43.608 Verification LBA range: start 0x0 length 0x2000 00:22:43.608 nvme0n1 : 1.04 3156.93 12.33 0.00 0.00 39780.04 7718.68 58642.58 00:22:43.608 =================================================================================================================== 00:22:43.608 Total : 3156.93 12.33 0.00 0.00 39780.04 7718.68 58642.58 00:22:43.608 0 00:22:43.608 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:43.608 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.608 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.608 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.608 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:43.608 "subsystems": [ 00:22:43.608 { 00:22:43.608 "subsystem": "keyring", 00:22:43.608 "config": [ 00:22:43.608 { 00:22:43.608 "method": "keyring_file_add_key", 00:22:43.608 "params": { 00:22:43.608 "name": "key0", 00:22:43.608 "path": "/tmp/tmp.SZd6Lb4Q9X" 00:22:43.608 } 00:22:43.608 } 00:22:43.608 ] 00:22:43.608 }, 00:22:43.608 { 00:22:43.608 "subsystem": "iobuf", 00:22:43.608 "config": [ 00:22:43.608 { 00:22:43.608 "method": "iobuf_set_options", 00:22:43.608 "params": { 00:22:43.608 "small_pool_count": 8192, 00:22:43.608 "large_pool_count": 1024, 00:22:43.608 "small_bufsize": 8192, 00:22:43.608 "large_bufsize": 135168 00:22:43.608 } 00:22:43.608 } 00:22:43.608 ] 00:22:43.608 }, 00:22:43.608 { 00:22:43.608 "subsystem": "sock", 00:22:43.608 "config": [ 00:22:43.608 { 00:22:43.608 "method": "sock_set_default_impl", 00:22:43.608 "params": { 00:22:43.608 "impl_name": "posix" 00:22:43.608 } 
00:22:43.608 }, 00:22:43.608 { 00:22:43.608 "method": "sock_impl_set_options", 00:22:43.608 "params": { 00:22:43.608 "impl_name": "ssl", 00:22:43.608 "recv_buf_size": 4096, 00:22:43.608 "send_buf_size": 4096, 00:22:43.608 "enable_recv_pipe": true, 00:22:43.608 "enable_quickack": false, 00:22:43.608 "enable_placement_id": 0, 00:22:43.608 "enable_zerocopy_send_server": true, 00:22:43.608 "enable_zerocopy_send_client": false, 00:22:43.608 "zerocopy_threshold": 0, 00:22:43.608 "tls_version": 0, 00:22:43.608 "enable_ktls": false 00:22:43.608 } 00:22:43.608 }, 00:22:43.608 { 00:22:43.608 "method": "sock_impl_set_options", 00:22:43.608 "params": { 00:22:43.608 "impl_name": "posix", 00:22:43.608 "recv_buf_size": 2097152, 00:22:43.608 "send_buf_size": 2097152, 00:22:43.609 "enable_recv_pipe": true, 00:22:43.609 "enable_quickack": false, 00:22:43.609 "enable_placement_id": 0, 00:22:43.609 "enable_zerocopy_send_server": true, 00:22:43.609 "enable_zerocopy_send_client": false, 00:22:43.609 "zerocopy_threshold": 0, 00:22:43.609 "tls_version": 0, 00:22:43.609 "enable_ktls": false 00:22:43.609 } 00:22:43.609 } 00:22:43.609 ] 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "subsystem": "vmd", 00:22:43.609 "config": [] 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "subsystem": "accel", 00:22:43.609 "config": [ 00:22:43.609 { 00:22:43.609 "method": "accel_set_options", 00:22:43.609 "params": { 00:22:43.609 "small_cache_size": 128, 00:22:43.609 "large_cache_size": 16, 00:22:43.609 "task_count": 2048, 00:22:43.609 "sequence_count": 2048, 00:22:43.609 "buf_count": 2048 00:22:43.609 } 00:22:43.609 } 00:22:43.609 ] 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "subsystem": "bdev", 00:22:43.609 "config": [ 00:22:43.609 { 00:22:43.609 "method": "bdev_set_options", 00:22:43.609 "params": { 00:22:43.609 "bdev_io_pool_size": 65535, 00:22:43.609 "bdev_io_cache_size": 256, 00:22:43.609 "bdev_auto_examine": true, 00:22:43.609 "iobuf_small_cache_size": 128, 00:22:43.609 "iobuf_large_cache_size": 16 
00:22:43.609 } 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "method": "bdev_raid_set_options", 00:22:43.609 "params": { 00:22:43.609 "process_window_size_kb": 1024, 00:22:43.609 "process_max_bandwidth_mb_sec": 0 00:22:43.609 } 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "method": "bdev_iscsi_set_options", 00:22:43.609 "params": { 00:22:43.609 "timeout_sec": 30 00:22:43.609 } 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "method": "bdev_nvme_set_options", 00:22:43.609 "params": { 00:22:43.609 "action_on_timeout": "none", 00:22:43.609 "timeout_us": 0, 00:22:43.609 "timeout_admin_us": 0, 00:22:43.609 "keep_alive_timeout_ms": 10000, 00:22:43.609 "arbitration_burst": 0, 00:22:43.609 "low_priority_weight": 0, 00:22:43.609 "medium_priority_weight": 0, 00:22:43.609 "high_priority_weight": 0, 00:22:43.609 "nvme_adminq_poll_period_us": 10000, 00:22:43.609 "nvme_ioq_poll_period_us": 0, 00:22:43.609 "io_queue_requests": 0, 00:22:43.609 "delay_cmd_submit": true, 00:22:43.609 "transport_retry_count": 4, 00:22:43.609 "bdev_retry_count": 3, 00:22:43.609 "transport_ack_timeout": 0, 00:22:43.609 "ctrlr_loss_timeout_sec": 0, 00:22:43.609 "reconnect_delay_sec": 0, 00:22:43.609 "fast_io_fail_timeout_sec": 0, 00:22:43.609 "disable_auto_failback": false, 00:22:43.609 "generate_uuids": false, 00:22:43.609 "transport_tos": 0, 00:22:43.609 "nvme_error_stat": false, 00:22:43.609 "rdma_srq_size": 0, 00:22:43.609 "io_path_stat": false, 00:22:43.609 "allow_accel_sequence": false, 00:22:43.609 "rdma_max_cq_size": 0, 00:22:43.609 "rdma_cm_event_timeout_ms": 0, 00:22:43.609 "dhchap_digests": [ 00:22:43.609 "sha256", 00:22:43.609 "sha384", 00:22:43.609 "sha512" 00:22:43.609 ], 00:22:43.609 "dhchap_dhgroups": [ 00:22:43.609 "null", 00:22:43.609 "ffdhe2048", 00:22:43.609 "ffdhe3072", 00:22:43.609 "ffdhe4096", 00:22:43.609 "ffdhe6144", 00:22:43.609 "ffdhe8192" 00:22:43.609 ] 00:22:43.609 } 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "method": "bdev_nvme_set_hotplug", 00:22:43.609 "params": { 00:22:43.609 
"period_us": 100000, 00:22:43.609 "enable": false 00:22:43.609 } 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "method": "bdev_malloc_create", 00:22:43.609 "params": { 00:22:43.609 "name": "malloc0", 00:22:43.609 "num_blocks": 8192, 00:22:43.609 "block_size": 4096, 00:22:43.609 "physical_block_size": 4096, 00:22:43.609 "uuid": "ea89dc0d-eff0-48b9-9193-ac428cfd7983", 00:22:43.609 "optimal_io_boundary": 0, 00:22:43.609 "md_size": 0, 00:22:43.609 "dif_type": 0, 00:22:43.609 "dif_is_head_of_md": false, 00:22:43.609 "dif_pi_format": 0 00:22:43.609 } 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "method": "bdev_wait_for_examine" 00:22:43.609 } 00:22:43.609 ] 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "subsystem": "nbd", 00:22:43.609 "config": [] 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "subsystem": "scheduler", 00:22:43.609 "config": [ 00:22:43.609 { 00:22:43.609 "method": "framework_set_scheduler", 00:22:43.609 "params": { 00:22:43.609 "name": "static" 00:22:43.609 } 00:22:43.609 } 00:22:43.609 ] 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "subsystem": "nvmf", 00:22:43.609 "config": [ 00:22:43.609 { 00:22:43.609 "method": "nvmf_set_config", 00:22:43.609 "params": { 00:22:43.609 "discovery_filter": "match_any", 00:22:43.609 "admin_cmd_passthru": { 00:22:43.609 "identify_ctrlr": false 00:22:43.609 } 00:22:43.609 } 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "method": "nvmf_set_max_subsystems", 00:22:43.609 "params": { 00:22:43.609 "max_subsystems": 1024 00:22:43.609 } 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "method": "nvmf_set_crdt", 00:22:43.609 "params": { 00:22:43.609 "crdt1": 0, 00:22:43.609 "crdt2": 0, 00:22:43.609 "crdt3": 0 00:22:43.609 } 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "method": "nvmf_create_transport", 00:22:43.609 "params": { 00:22:43.609 "trtype": "TCP", 00:22:43.609 "max_queue_depth": 128, 00:22:43.609 "max_io_qpairs_per_ctrlr": 127, 00:22:43.609 "in_capsule_data_size": 4096, 00:22:43.609 "max_io_size": 131072, 00:22:43.609 "io_unit_size": 
131072, 00:22:43.609 "max_aq_depth": 128, 00:22:43.609 "num_shared_buffers": 511, 00:22:43.609 "buf_cache_size": 4294967295, 00:22:43.609 "dif_insert_or_strip": false, 00:22:43.609 "zcopy": false, 00:22:43.609 "c2h_success": false, 00:22:43.609 "sock_priority": 0, 00:22:43.609 "abort_timeout_sec": 1, 00:22:43.609 "ack_timeout": 0, 00:22:43.609 "data_wr_pool_size": 0 00:22:43.609 } 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "method": "nvmf_create_subsystem", 00:22:43.609 "params": { 00:22:43.609 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.609 "allow_any_host": false, 00:22:43.609 "serial_number": "00000000000000000000", 00:22:43.609 "model_number": "SPDK bdev Controller", 00:22:43.609 "max_namespaces": 32, 00:22:43.609 "min_cntlid": 1, 00:22:43.609 "max_cntlid": 65519, 00:22:43.609 "ana_reporting": false 00:22:43.609 } 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "method": "nvmf_subsystem_add_host", 00:22:43.609 "params": { 00:22:43.609 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.609 "host": "nqn.2016-06.io.spdk:host1", 00:22:43.609 "psk": "key0" 00:22:43.609 } 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "method": "nvmf_subsystem_add_ns", 00:22:43.609 "params": { 00:22:43.609 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.609 "namespace": { 00:22:43.609 "nsid": 1, 00:22:43.609 "bdev_name": "malloc0", 00:22:43.609 "nguid": "EA89DC0DEFF048B99193AC428CFD7983", 00:22:43.609 "uuid": "ea89dc0d-eff0-48b9-9193-ac428cfd7983", 00:22:43.609 "no_auto_visible": false 00:22:43.609 } 00:22:43.609 } 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "method": "nvmf_subsystem_add_listener", 00:22:43.609 "params": { 00:22:43.609 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.609 "listen_address": { 00:22:43.609 "trtype": "TCP", 00:22:43.609 "adrfam": "IPv4", 00:22:43.609 "traddr": "10.0.0.2", 00:22:43.609 "trsvcid": "4420" 00:22:43.609 }, 00:22:43.609 "secure_channel": false, 00:22:43.609 "sock_impl": "ssl" 00:22:43.609 } 00:22:43.609 } 00:22:43.609 ] 00:22:43.609 } 00:22:43.609 ] 
00:22:43.609 }' 00:22:43.609 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:43.867 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:43.867 "subsystems": [ 00:22:43.867 { 00:22:43.867 "subsystem": "keyring", 00:22:43.867 "config": [ 00:22:43.867 { 00:22:43.867 "method": "keyring_file_add_key", 00:22:43.867 "params": { 00:22:43.867 "name": "key0", 00:22:43.867 "path": "/tmp/tmp.SZd6Lb4Q9X" 00:22:43.867 } 00:22:43.867 } 00:22:43.867 ] 00:22:43.867 }, 00:22:43.867 { 00:22:43.867 "subsystem": "iobuf", 00:22:43.867 "config": [ 00:22:43.867 { 00:22:43.867 "method": "iobuf_set_options", 00:22:43.867 "params": { 00:22:43.867 "small_pool_count": 8192, 00:22:43.867 "large_pool_count": 1024, 00:22:43.867 "small_bufsize": 8192, 00:22:43.867 "large_bufsize": 135168 00:22:43.867 } 00:22:43.867 } 00:22:43.867 ] 00:22:43.867 }, 00:22:43.867 { 00:22:43.867 "subsystem": "sock", 00:22:43.867 "config": [ 00:22:43.867 { 00:22:43.867 "method": "sock_set_default_impl", 00:22:43.867 "params": { 00:22:43.867 "impl_name": "posix" 00:22:43.867 } 00:22:43.867 }, 00:22:43.867 { 00:22:43.867 "method": "sock_impl_set_options", 00:22:43.867 "params": { 00:22:43.867 "impl_name": "ssl", 00:22:43.867 "recv_buf_size": 4096, 00:22:43.867 "send_buf_size": 4096, 00:22:43.867 "enable_recv_pipe": true, 00:22:43.867 "enable_quickack": false, 00:22:43.867 "enable_placement_id": 0, 00:22:43.867 "enable_zerocopy_send_server": true, 00:22:43.867 "enable_zerocopy_send_client": false, 00:22:43.867 "zerocopy_threshold": 0, 00:22:43.867 "tls_version": 0, 00:22:43.867 "enable_ktls": false 00:22:43.867 } 00:22:43.867 }, 00:22:43.867 { 00:22:43.867 "method": "sock_impl_set_options", 00:22:43.867 "params": { 00:22:43.868 "impl_name": "posix", 00:22:43.868 "recv_buf_size": 2097152, 00:22:43.868 "send_buf_size": 2097152, 00:22:43.868 
"enable_recv_pipe": true, 00:22:43.868 "enable_quickack": false, 00:22:43.868 "enable_placement_id": 0, 00:22:43.868 "enable_zerocopy_send_server": true, 00:22:43.868 "enable_zerocopy_send_client": false, 00:22:43.868 "zerocopy_threshold": 0, 00:22:43.868 "tls_version": 0, 00:22:43.868 "enable_ktls": false 00:22:43.868 } 00:22:43.868 } 00:22:43.868 ] 00:22:43.868 }, 00:22:43.868 { 00:22:43.868 "subsystem": "vmd", 00:22:43.868 "config": [] 00:22:43.868 }, 00:22:43.868 { 00:22:43.868 "subsystem": "accel", 00:22:43.868 "config": [ 00:22:43.868 { 00:22:43.868 "method": "accel_set_options", 00:22:43.868 "params": { 00:22:43.868 "small_cache_size": 128, 00:22:43.868 "large_cache_size": 16, 00:22:43.868 "task_count": 2048, 00:22:43.868 "sequence_count": 2048, 00:22:43.868 "buf_count": 2048 00:22:43.868 } 00:22:43.868 } 00:22:43.868 ] 00:22:43.868 }, 00:22:43.868 { 00:22:43.868 "subsystem": "bdev", 00:22:43.868 "config": [ 00:22:43.868 { 00:22:43.868 "method": "bdev_set_options", 00:22:43.868 "params": { 00:22:43.868 "bdev_io_pool_size": 65535, 00:22:43.868 "bdev_io_cache_size": 256, 00:22:43.868 "bdev_auto_examine": true, 00:22:43.868 "iobuf_small_cache_size": 128, 00:22:43.868 "iobuf_large_cache_size": 16 00:22:43.868 } 00:22:43.868 }, 00:22:43.868 { 00:22:43.868 "method": "bdev_raid_set_options", 00:22:43.868 "params": { 00:22:43.868 "process_window_size_kb": 1024, 00:22:43.868 "process_max_bandwidth_mb_sec": 0 00:22:43.868 } 00:22:43.868 }, 00:22:43.868 { 00:22:43.868 "method": "bdev_iscsi_set_options", 00:22:43.868 "params": { 00:22:43.868 "timeout_sec": 30 00:22:43.868 } 00:22:43.868 }, 00:22:43.868 { 00:22:43.868 "method": "bdev_nvme_set_options", 00:22:43.868 "params": { 00:22:43.868 "action_on_timeout": "none", 00:22:43.868 "timeout_us": 0, 00:22:43.868 "timeout_admin_us": 0, 00:22:43.868 "keep_alive_timeout_ms": 10000, 00:22:43.868 "arbitration_burst": 0, 00:22:43.868 "low_priority_weight": 0, 00:22:43.868 "medium_priority_weight": 0, 00:22:43.868 
"high_priority_weight": 0, 00:22:43.868 "nvme_adminq_poll_period_us": 10000, 00:22:43.868 "nvme_ioq_poll_period_us": 0, 00:22:43.868 "io_queue_requests": 512, 00:22:43.868 "delay_cmd_submit": true, 00:22:43.868 "transport_retry_count": 4, 00:22:43.868 "bdev_retry_count": 3, 00:22:43.868 "transport_ack_timeout": 0, 00:22:43.868 "ctrlr_loss_timeout_sec": 0, 00:22:43.868 "reconnect_delay_sec": 0, 00:22:43.868 "fast_io_fail_timeout_sec": 0, 00:22:43.868 "disable_auto_failback": false, 00:22:43.868 "generate_uuids": false, 00:22:43.868 "transport_tos": 0, 00:22:43.868 "nvme_error_stat": false, 00:22:43.868 "rdma_srq_size": 0, 00:22:43.868 "io_path_stat": false, 00:22:43.868 "allow_accel_sequence": false, 00:22:43.868 "rdma_max_cq_size": 0, 00:22:43.868 "rdma_cm_event_timeout_ms": 0, 00:22:43.868 "dhchap_digests": [ 00:22:43.868 "sha256", 00:22:43.868 "sha384", 00:22:43.868 "sha512" 00:22:43.868 ], 00:22:43.868 "dhchap_dhgroups": [ 00:22:43.868 "null", 00:22:43.868 "ffdhe2048", 00:22:43.868 "ffdhe3072", 00:22:43.868 "ffdhe4096", 00:22:43.868 "ffdhe6144", 00:22:43.868 "ffdhe8192" 00:22:43.868 ] 00:22:43.868 } 00:22:43.868 }, 00:22:43.868 { 00:22:43.868 "method": "bdev_nvme_attach_controller", 00:22:43.868 "params": { 00:22:43.868 "name": "nvme0", 00:22:43.868 "trtype": "TCP", 00:22:43.868 "adrfam": "IPv4", 00:22:43.868 "traddr": "10.0.0.2", 00:22:43.868 "trsvcid": "4420", 00:22:43.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.868 "prchk_reftag": false, 00:22:43.868 "prchk_guard": false, 00:22:43.868 "ctrlr_loss_timeout_sec": 0, 00:22:43.868 "reconnect_delay_sec": 0, 00:22:43.868 "fast_io_fail_timeout_sec": 0, 00:22:43.868 "psk": "key0", 00:22:43.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.868 "hdgst": false, 00:22:43.868 "ddgst": false 00:22:43.868 } 00:22:43.868 }, 00:22:43.868 { 00:22:43.868 "method": "bdev_nvme_set_hotplug", 00:22:43.868 "params": { 00:22:43.868 "period_us": 100000, 00:22:43.868 "enable": false 00:22:43.868 } 00:22:43.868 }, 
00:22:43.868 { 00:22:43.868 "method": "bdev_enable_histogram", 00:22:43.868 "params": { 00:22:43.868 "name": "nvme0n1", 00:22:43.868 "enable": true 00:22:43.868 } 00:22:43.868 }, 00:22:43.868 { 00:22:43.868 "method": "bdev_wait_for_examine" 00:22:43.868 } 00:22:43.868 ] 00:22:43.868 }, 00:22:43.868 { 00:22:43.868 "subsystem": "nbd", 00:22:43.868 "config": [] 00:22:43.868 } 00:22:43.868 ] 00:22:43.868 }' 00:22:43.868 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 869773 00:22:43.868 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 869773 ']' 00:22:43.868 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 869773 00:22:43.868 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:43.868 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.868 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 869773 00:22:43.868 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:43.868 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:43.868 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 869773' 00:22:43.868 killing process with pid 869773 00:22:43.868 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 869773 00:22:43.868 Received shutdown signal, test time was about 1.000000 seconds 00:22:43.868 00:22:43.868 Latency(us) 00:22:43.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.868 =================================================================================================================== 00:22:43.868 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:22:43.868 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 869773 00:22:44.126 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 869725 00:22:44.126 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 869725 ']' 00:22:44.126 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 869725 00:22:44.126 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:44.126 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.126 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 869725 00:22:44.126 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:44.126 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:44.126 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 869725' 00:22:44.126 killing process with pid 869725 00:22:44.126 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 869725 00:22:44.126 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 869725 00:22:44.383 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:44.384 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:44.384 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:44.384 "subsystems": [ 00:22:44.384 { 00:22:44.384 "subsystem": "keyring", 00:22:44.384 "config": [ 00:22:44.384 { 00:22:44.384 "method": "keyring_file_add_key", 00:22:44.384 "params": { 00:22:44.384 "name": "key0", 00:22:44.384 "path": 
"/tmp/tmp.SZd6Lb4Q9X" 00:22:44.384 } 00:22:44.384 } 00:22:44.384 ] 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "subsystem": "iobuf", 00:22:44.384 "config": [ 00:22:44.384 { 00:22:44.384 "method": "iobuf_set_options", 00:22:44.384 "params": { 00:22:44.384 "small_pool_count": 8192, 00:22:44.384 "large_pool_count": 1024, 00:22:44.384 "small_bufsize": 8192, 00:22:44.384 "large_bufsize": 135168 00:22:44.384 } 00:22:44.384 } 00:22:44.384 ] 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "subsystem": "sock", 00:22:44.384 "config": [ 00:22:44.384 { 00:22:44.384 "method": "sock_set_default_impl", 00:22:44.384 "params": { 00:22:44.384 "impl_name": "posix" 00:22:44.384 } 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "method": "sock_impl_set_options", 00:22:44.384 "params": { 00:22:44.384 "impl_name": "ssl", 00:22:44.384 "recv_buf_size": 4096, 00:22:44.384 "send_buf_size": 4096, 00:22:44.384 "enable_recv_pipe": true, 00:22:44.384 "enable_quickack": false, 00:22:44.384 "enable_placement_id": 0, 00:22:44.384 "enable_zerocopy_send_server": true, 00:22:44.384 "enable_zerocopy_send_client": false, 00:22:44.384 "zerocopy_threshold": 0, 00:22:44.384 "tls_version": 0, 00:22:44.384 "enable_ktls": false 00:22:44.384 } 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "method": "sock_impl_set_options", 00:22:44.384 "params": { 00:22:44.384 "impl_name": "posix", 00:22:44.384 "recv_buf_size": 2097152, 00:22:44.384 "send_buf_size": 2097152, 00:22:44.384 "enable_recv_pipe": true, 00:22:44.384 "enable_quickack": false, 00:22:44.384 "enable_placement_id": 0, 00:22:44.384 "enable_zerocopy_send_server": true, 00:22:44.384 "enable_zerocopy_send_client": false, 00:22:44.384 "zerocopy_threshold": 0, 00:22:44.384 "tls_version": 0, 00:22:44.384 "enable_ktls": false 00:22:44.384 } 00:22:44.384 } 00:22:44.384 ] 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "subsystem": "vmd", 00:22:44.384 "config": [] 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "subsystem": "accel", 00:22:44.384 "config": [ 00:22:44.384 { 
00:22:44.384 "method": "accel_set_options", 00:22:44.384 "params": { 00:22:44.384 "small_cache_size": 128, 00:22:44.384 "large_cache_size": 16, 00:22:44.384 "task_count": 2048, 00:22:44.384 "sequence_count": 2048, 00:22:44.384 "buf_count": 2048 00:22:44.384 } 00:22:44.384 } 00:22:44.384 ] 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "subsystem": "bdev", 00:22:44.384 "config": [ 00:22:44.384 { 00:22:44.384 "method": "bdev_set_options", 00:22:44.384 "params": { 00:22:44.384 "bdev_io_pool_size": 65535, 00:22:44.384 "bdev_io_cache_size": 256, 00:22:44.384 "bdev_auto_examine": true, 00:22:44.384 "iobuf_small_cache_size": 128, 00:22:44.384 "iobuf_large_cache_size": 16 00:22:44.384 } 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "method": "bdev_raid_set_options", 00:22:44.384 "params": { 00:22:44.384 "process_window_size_kb": 1024, 00:22:44.384 "process_max_bandwidth_mb_sec": 0 00:22:44.384 } 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "method": "bdev_iscsi_set_options", 00:22:44.384 "params": { 00:22:44.384 "timeout_sec": 30 00:22:44.384 } 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "method": "bdev_nvme_set_options", 00:22:44.384 "params": { 00:22:44.384 "action_on_timeout": "none", 00:22:44.384 "timeout_us": 0, 00:22:44.384 "timeout_admin_us": 0, 00:22:44.384 "keep_alive_timeout_ms": 10000, 00:22:44.384 "arbitration_burst": 0, 00:22:44.384 "low_priority_weight": 0, 00:22:44.384 "medium_priority_weight": 0, 00:22:44.384 "high_priority_weight": 0, 00:22:44.384 "nvme_adminq_poll_period_us": 10000, 00:22:44.384 "nvme_ioq_poll_period_us": 0, 00:22:44.384 "io_queue_requests": 0, 00:22:44.384 "delay_cmd_submit": true, 00:22:44.384 "transport_retry_count": 4, 00:22:44.384 "bdev_retry_count": 3, 00:22:44.384 "transport_ack_timeout": 0, 00:22:44.384 "ctrlr_loss_timeout_sec": 0, 00:22:44.384 "reconnect_delay_sec": 0, 00:22:44.384 "fast_io_fail_timeout_sec": 0, 00:22:44.384 "disable_auto_failback": false, 00:22:44.384 "generate_uuids": false, 00:22:44.384 "transport_tos": 0, 
00:22:44.384 "nvme_error_stat": false, 00:22:44.384 "rdma_srq_size": 0, 00:22:44.384 "io_path_stat": false, 00:22:44.384 "allow_accel_sequence": false, 00:22:44.384 "rdma_max_cq_size": 0, 00:22:44.384 "rdma_cm_event_timeout_ms": 0, 00:22:44.384 "dhchap_digests": [ 00:22:44.384 "sha256", 00:22:44.384 "sha384", 00:22:44.384 "sha512" 00:22:44.384 ], 00:22:44.384 "dhchap_dhgroups": [ 00:22:44.384 "null", 00:22:44.384 "ffdhe2048", 00:22:44.384 "ffdhe3072", 00:22:44.384 "ffdhe4096", 00:22:44.384 "ffdhe6144", 00:22:44.384 "ffdhe8192" 00:22:44.384 ] 00:22:44.384 } 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "method": "bdev_nvme_set_hotplug", 00:22:44.384 "params": { 00:22:44.384 "period_us": 100000, 00:22:44.384 "enable": false 00:22:44.384 } 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "method": "bdev_malloc_create", 00:22:44.384 "params": { 00:22:44.384 "name": "malloc0", 00:22:44.384 "num_blocks": 8192, 00:22:44.384 "block_size": 4096, 00:22:44.384 "physical_block_size": 4096, 00:22:44.384 "uuid": "ea89dc0d-eff0-48b9-9193-ac428cfd7983", 00:22:44.384 "optimal_io_boundary": 0, 00:22:44.384 "md_size": 0, 00:22:44.384 "dif_type": 0, 00:22:44.384 "dif_is_head_of_md": false, 00:22:44.384 "dif_pi_format": 0 00:22:44.384 } 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "method": "bdev_wait_for_examine" 00:22:44.384 } 00:22:44.384 ] 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "subsystem": "nbd", 00:22:44.384 "config": [] 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "subsystem": "scheduler", 00:22:44.384 "config": [ 00:22:44.384 { 00:22:44.384 "method": "framework_set_scheduler", 00:22:44.384 "params": { 00:22:44.384 "name": "static" 00:22:44.384 } 00:22:44.384 } 00:22:44.384 ] 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "subsystem": "nvmf", 00:22:44.384 "config": [ 00:22:44.384 { 00:22:44.384 "method": "nvmf_set_config", 00:22:44.384 "params": { 00:22:44.384 "discovery_filter": "match_any", 00:22:44.384 "admin_cmd_passthru": { 00:22:44.384 "identify_ctrlr": false 00:22:44.384 } 
00:22:44.384 } 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "method": "nvmf_set_max_subsystems", 00:22:44.384 "params": { 00:22:44.384 "max_subsystems": 1024 00:22:44.384 } 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "method": "nvmf_set_crdt", 00:22:44.384 "params": { 00:22:44.384 "crdt1": 0, 00:22:44.384 "crdt2": 0, 00:22:44.384 "crdt3": 0 00:22:44.384 } 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "method": "nvmf_create_transport", 00:22:44.384 "params": { 00:22:44.384 "trtype": "TCP", 00:22:44.384 "max_queue_depth": 128, 00:22:44.384 "max_io_qpairs_per_ctrlr": 127, 00:22:44.384 "in_capsule_data_size": 4096, 00:22:44.384 "max_io_size": 131072, 00:22:44.384 "io_unit_size": 131072, 00:22:44.384 "max_aq_depth": 128, 00:22:44.384 "num_shared_buffers": 511, 00:22:44.384 "buf_cache_size": 4294967295, 00:22:44.384 "dif_insert_or_strip": false, 00:22:44.384 "zcopy": false, 00:22:44.384 "c2h_success": false, 00:22:44.384 "sock_priority": 0, 00:22:44.384 "abort_timeout_sec": 1, 00:22:44.384 "ack_timeout": 0, 00:22:44.384 "data_wr_pool_size": 0 00:22:44.384 } 00:22:44.384 }, 00:22:44.384 { 00:22:44.384 "method": "nvmf_create_subsystem", 00:22:44.384 "params": { 00:22:44.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.384 "allow_any_host": false, 00:22:44.385 "serial_number": "00000000000000000000", 00:22:44.385 "model_number": "SPDK bdev Controller", 00:22:44.385 "max_namespaces": 32, 00:22:44.385 "min_cntlid": 1, 00:22:44.385 "max_cntlid": 65519, 00:22:44.385 "ana_reporting": false 00:22:44.385 } 00:22:44.385 }, 00:22:44.385 { 00:22:44.385 "method": "nvmf_subsystem_add_host", 00:22:44.385 "params": { 00:22:44.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.385 "host": "nqn.2016-06.io.spdk:host1", 00:22:44.385 "psk": "key0" 00:22:44.385 } 00:22:44.385 }, 00:22:44.385 { 00:22:44.385 "method": "nvmf_subsystem_add_ns", 00:22:44.385 "params": { 00:22:44.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.385 "namespace": { 00:22:44.385 "nsid": 1, 00:22:44.385 "bdev_name": 
"malloc0", 00:22:44.385 "nguid": "EA89DC0DEFF048B99193AC428CFD7983", 00:22:44.385 "uuid": "ea89dc0d-eff0-48b9-9193-ac428cfd7983", 00:22:44.385 "no_auto_visible": false 00:22:44.385 } 00:22:44.385 } 00:22:44.385 }, 00:22:44.385 { 00:22:44.385 "method": "nvmf_subsystem_add_listener", 00:22:44.385 "params": { 00:22:44.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.385 "listen_address": { 00:22:44.385 "trtype": "TCP", 00:22:44.385 "adrfam": "IPv4", 00:22:44.385 "traddr": "10.0.0.2", 00:22:44.385 "trsvcid": "4420" 00:22:44.385 }, 00:22:44.385 "secure_channel": false, 00:22:44.385 "sock_impl": "ssl" 00:22:44.385 } 00:22:44.385 } 00:22:44.385 ] 00:22:44.385 } 00:22:44.385 ] 00:22:44.385 }' 00:22:44.385 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:44.385 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.385 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=870179 00:22:44.385 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:44.385 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 870179 00:22:44.385 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 870179 ']' 00:22:44.385 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.385 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.385 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:44.385 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.385 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.385 [2024-07-25 04:05:59.526400] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:44.385 [2024-07-25 04:05:59.526482] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.385 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.385 [2024-07-25 04:05:59.563821] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:44.385 [2024-07-25 04:05:59.591412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.385 [2024-07-25 04:05:59.680265] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.385 [2024-07-25 04:05:59.680317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.385 [2024-07-25 04:05:59.680341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.385 [2024-07-25 04:05:59.680353] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.385 [2024-07-25 04:05:59.680364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:44.385 [2024-07-25 04:05:59.680447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.642 [2024-07-25 04:05:59.925630] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.899 [2024-07-25 04:05:59.966121] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.899 [2024-07-25 04:05:59.966407] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=870301 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 870301 /var/tmp/bdevperf.sock 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 870301 ']' 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:45.465 "subsystems": [ 00:22:45.465 { 00:22:45.465 "subsystem": "keyring", 00:22:45.465 "config": [ 00:22:45.465 { 00:22:45.465 "method": "keyring_file_add_key", 00:22:45.465 "params": { 00:22:45.465 "name": "key0", 00:22:45.465 "path": "/tmp/tmp.SZd6Lb4Q9X" 00:22:45.465 } 00:22:45.465 } 00:22:45.465 ] 00:22:45.465 }, 00:22:45.465 { 00:22:45.465 "subsystem": "iobuf", 00:22:45.465 "config": [ 00:22:45.465 { 00:22:45.465 "method": "iobuf_set_options", 00:22:45.465 "params": { 00:22:45.465 "small_pool_count": 8192, 00:22:45.465 "large_pool_count": 1024, 00:22:45.465 "small_bufsize": 8192, 00:22:45.465 "large_bufsize": 135168 00:22:45.465 } 00:22:45.465 } 00:22:45.465 ] 00:22:45.465 }, 00:22:45.465 { 00:22:45.465 "subsystem": "sock", 00:22:45.465 "config": [ 00:22:45.465 { 00:22:45.465 "method": "sock_set_default_impl", 00:22:45.465 "params": { 00:22:45.465 "impl_name": "posix" 00:22:45.465 } 00:22:45.466 }, 00:22:45.466 { 00:22:45.466 "method": "sock_impl_set_options", 00:22:45.466 "params": { 00:22:45.466 "impl_name": "ssl", 00:22:45.466 "recv_buf_size": 4096, 00:22:45.466 "send_buf_size": 4096, 00:22:45.466 "enable_recv_pipe": true, 00:22:45.466 "enable_quickack": false, 00:22:45.466 "enable_placement_id": 0, 00:22:45.466 "enable_zerocopy_send_server": true, 00:22:45.466 "enable_zerocopy_send_client": false, 00:22:45.466 "zerocopy_threshold": 0, 00:22:45.466 "tls_version": 0, 00:22:45.466 "enable_ktls": false 00:22:45.466 } 00:22:45.466 }, 00:22:45.466 { 
00:22:45.466 "method": "sock_impl_set_options", 00:22:45.466 "params": { 00:22:45.466 "impl_name": "posix", 00:22:45.466 "recv_buf_size": 2097152, 00:22:45.466 "send_buf_size": 2097152, 00:22:45.466 "enable_recv_pipe": true, 00:22:45.466 "enable_quickack": false, 00:22:45.466 "enable_placement_id": 0, 00:22:45.466 "enable_zerocopy_send_server": true, 00:22:45.466 "enable_zerocopy_send_client": false, 00:22:45.466 "zerocopy_threshold": 0, 00:22:45.466 "tls_version": 0, 00:22:45.466 "enable_ktls": false 00:22:45.466 } 00:22:45.466 } 00:22:45.466 ] 00:22:45.466 }, 00:22:45.466 { 00:22:45.466 "subsystem": "vmd", 00:22:45.466 "config": [] 00:22:45.466 }, 00:22:45.466 { 00:22:45.466 "subsystem": "accel", 00:22:45.466 "config": [ 00:22:45.466 { 00:22:45.466 "method": "accel_set_options", 00:22:45.466 "params": { 00:22:45.466 "small_cache_size": 128, 00:22:45.466 "large_cache_size": 16, 00:22:45.466 "task_count": 2048, 00:22:45.466 "sequence_count": 2048, 00:22:45.466 "buf_count": 2048 00:22:45.466 } 00:22:45.466 } 00:22:45.466 ] 00:22:45.466 }, 00:22:45.466 { 00:22:45.466 "subsystem": "bdev", 00:22:45.466 "config": [ 00:22:45.466 { 00:22:45.466 "method": "bdev_set_options", 00:22:45.466 "params": { 00:22:45.466 "bdev_io_pool_size": 65535, 00:22:45.466 "bdev_io_cache_size": 256, 00:22:45.466 "bdev_auto_examine": true, 00:22:45.466 "iobuf_small_cache_size": 128, 00:22:45.466 "iobuf_large_cache_size": 16 00:22:45.466 } 00:22:45.466 }, 00:22:45.466 { 00:22:45.466 "method": "bdev_raid_set_options", 00:22:45.466 "params": { 00:22:45.466 "process_window_size_kb": 1024, 00:22:45.466 "process_max_bandwidth_mb_sec": 0 00:22:45.466 } 00:22:45.466 }, 00:22:45.466 { 00:22:45.466 "method": "bdev_iscsi_set_options", 00:22:45.466 "params": { 00:22:45.466 "timeout_sec": 30 00:22:45.466 } 00:22:45.466 }, 00:22:45.466 { 00:22:45.466 "method": "bdev_nvme_set_options", 00:22:45.466 "params": { 00:22:45.466 "action_on_timeout": "none", 00:22:45.466 "timeout_us": 0, 00:22:45.466 
"timeout_admin_us": 0, 00:22:45.466 "keep_alive_timeout_ms": 10000, 00:22:45.466 "arbitration_burst": 0, 00:22:45.466 "low_priority_weight": 0, 00:22:45.466 "medium_priority_weight": 0, 00:22:45.466 "high_priority_weight": 0, 00:22:45.466 "nvme_adminq_poll_period_us": 10000, 00:22:45.466 "nvme_ioq_poll_period_us": 0, 00:22:45.466 "io_queue_requests": 512, 00:22:45.466 "delay_cmd_submit": true, 00:22:45.466 "transport_retry_count": 4, 00:22:45.466 "bdev_retry_count": 3, 00:22:45.466 "transport_ack_timeout": 0, 00:22:45.466 "ctrlr_loss_timeout_sec": 0, 00:22:45.466 "reconnect_delay_sec": 0, 00:22:45.466 "fast_io_fail_timeout_sec": 0, 00:22:45.466 "disable_auto_failback": false, 00:22:45.466 "generate_uuids": false, 00:22:45.466 "transport_tos": 0, 00:22:45.466 "nvme_error_stat": false, 00:22:45.466 "rdma_srq_size": 0, 00:22:45.466 "io_path_stat": false, 00:22:45.466 "allow_accel_sequence": false, 00:22:45.466 "rdma_max_cq_size": 0, 00:22:45.466 "rdma_cm_event_timeout_ms": 0, 00:22:45.466 "dhchap_digests": [ 00:22:45.466 "sha256", 00:22:45.466 "sha384", 00:22:45.466 "sha512" 00:22:45.466 ], 00:22:45.466 "dhchap_dhgroups": [ 00:22:45.466 "null", 00:22:45.466 "ffdhe2048", 00:22:45.466 "ffdhe3072", 00:22:45.466 "ffdhe4096", 00:22:45.466 "ffdhe6144", 00:22:45.466 "ffdhe8192" 00:22:45.466 ] 00:22:45.466 } 00:22:45.466 }, 00:22:45.466 { 00:22:45.466 "method": "bdev_nvme_attach_controller", 00:22:45.466 "params": { 00:22:45.466 "name": "nvme0", 00:22:45.466 "trtype": "TCP", 00:22:45.466 "adrfam": "IPv4", 00:22:45.466 "traddr": "10.0.0.2", 00:22:45.466 "trsvcid": "4420", 00:22:45.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.466 "prchk_reftag": false, 00:22:45.466 "prchk_guard": false, 00:22:45.466 "ctrlr_loss_timeout_sec": 0, 00:22:45.466 "reconnect_delay_sec": 0, 00:22:45.466 "fast_io_fail_timeout_sec": 0, 00:22:45.466 "psk": "key0", 00:22:45.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.466 "hdgst": false, 00:22:45.466 "ddgst": false 00:22:45.466 } 
00:22:45.466 }, 00:22:45.466 { 00:22:45.466 "method": "bdev_nvme_set_hotplug", 00:22:45.466 "params": { 00:22:45.466 "period_us": 100000, 00:22:45.466 "enable": false 00:22:45.466 } 00:22:45.466 }, 00:22:45.466 { 00:22:45.466 "method": "bdev_enable_histogram", 00:22:45.466 "params": { 00:22:45.466 "name": "nvme0n1", 00:22:45.466 "enable": true 00:22:45.466 } 00:22:45.466 }, 00:22:45.466 { 00:22:45.466 "method": "bdev_wait_for_examine" 00:22:45.466 } 00:22:45.466 ] 00:22:45.466 }, 00:22:45.466 { 00:22:45.466 "subsystem": "nbd", 00:22:45.466 "config": [] 00:22:45.466 } 00:22:45.466 ] 00:22:45.466 }' 00:22:45.466 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.466 [2024-07-25 04:06:00.540160] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:45.466 [2024-07-25 04:06:00.540263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870301 ] 00:22:45.466 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.466 [2024-07-25 04:06:00.575081] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:45.466 [2024-07-25 04:06:00.605619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.466 [2024-07-25 04:06:00.698725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.724 [2024-07-25 04:06:00.872671] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.288 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.288 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:46.288 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:46.288 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:46.546 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.546 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:46.803 Running I/O for 1 seconds... 
00:22:47.732 00:22:47.732 Latency(us) 00:22:47.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.732 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:47.732 Verification LBA range: start 0x0 length 0x2000 00:22:47.732 nvme0n1 : 1.05 2312.76 9.03 0.00 0.00 54232.02 7912.87 92430.03 00:22:47.732 =================================================================================================================== 00:22:47.732 Total : 2312.76 9.03 0.00 0.00 54232.02 7912.87 92430.03 00:22:47.732 0 00:22:47.732 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:47.732 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:47.732 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:47.732 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:22:47.732 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:22:47.732 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:47.732 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:47.732 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:47.732 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:47.732 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:47.732 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:47.732 nvmf_trace.0 00:22:47.732 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@823 -- # return 0 00:22:47.732 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 870301 00:22:47.732 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 870301 ']' 00:22:47.732 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 870301 00:22:47.732 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:47.732 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:47.732 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 870301 00:22:47.989 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:47.989 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:47.989 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 870301' 00:22:47.989 killing process with pid 870301 00:22:47.989 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 870301 00:22:47.989 Received shutdown signal, test time was about 1.000000 seconds 00:22:47.989 00:22:47.989 Latency(us) 00:22:47.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.989 =================================================================================================================== 00:22:47.989 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.989 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 870301 00:22:47.989 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:47.989 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:47.989 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@117 -- # sync 00:22:47.989 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:47.989 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:47.989 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:47.989 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:47.989 rmmod nvme_tcp 00:22:47.989 rmmod nvme_fabrics 00:22:47.989 rmmod nvme_keyring 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 870179 ']' 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 870179 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 870179 ']' 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 870179 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 870179 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 870179' 00:22:48.247 
killing process with pid 870179 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 870179 00:22:48.247 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 870179 00:22:48.504 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:48.504 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:48.504 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:48.504 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:48.504 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:48.504 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.504 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.504 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.403 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:50.403 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.k9KbtF79su /tmp/tmp.3hM0slHT6i /tmp/tmp.SZd6Lb4Q9X 00:22:50.403 00:22:50.403 real 1m18.976s 00:22:50.403 user 2m6.963s 00:22:50.403 sys 0m26.998s 00:22:50.403 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:50.403 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.403 ************************************ 00:22:50.403 END TEST nvmf_tls 00:22:50.403 ************************************ 00:22:50.403 04:06:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:50.403 04:06:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:50.403 04:06:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:50.403 04:06:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:50.403 ************************************ 00:22:50.403 START TEST nvmf_fips 00:22:50.403 ************************************ 00:22:50.403 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:50.662 * Looking for test storage... 00:22:50.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.662 04:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
paths/export.sh@5 -- # export PATH 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@83 -- # local target=3.0.0 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:50.662 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:50.663 Error setting digest 00:22:50.663 00728F45017F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:50.663 00728F45017F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:50.663 04:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:50.663 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:52.563 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:52.564 04:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:52.564 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:52.564 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.564 04:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:52.564 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.564 
04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:52.564 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.564 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.565 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:52.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:52.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:22:52.565 00:22:52.565 --- 10.0.0.2 ping statistics --- 00:22:52.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.565 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:22:52.565 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:52.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:22:52.565 00:22:52.565 --- 10.0.0.1 ping statistics --- 00:22:52.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.565 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:22:52.565 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.565 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:52.565 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:52.565 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.565 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:52.565 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:52.565 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.565 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:52.565 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:52.823 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:52.823 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:52.823 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:22:52.823 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:52.823 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=872931 00:22:52.823 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:52.823 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 872931 00:22:52.823 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 872931 ']' 00:22:52.823 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.823 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:52.823 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.823 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:52.823 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:52.823 [2024-07-25 04:06:07.940188] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
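The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." loop traced here (waitforlisten, max_retries=100) boils down to polling for the app's RPC socket. A minimal standalone sketch of that pattern — `waitforsocket` is a hypothetical name, and unlike the real waitforlisten it only checks that the path exists, not that the process answers RPCs:

```shell
# Sketch of the waitforlisten polling pattern from autotest_common.sh:
# wait until the target's RPC socket (e.g. /var/tmp/spdk.sock) appears,
# giving up after a bounded number of retries. Hypothetical helper name;
# the real helper additionally verifies the pid and RPC responsiveness.
waitforsocket() {                # usage: waitforsocket /var/tmp/spdk.sock [retries]
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        # -S catches the normal case (unix socket); -e keeps the sketch testable
        [[ -S $path || -e $path ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```

In the run above the equivalent loop gates every subsequent `rpc.py` call (setup_nvmf_tgt_conf, bdev_nvme_attach_controller) on the target actually listening.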
00:22:52.823 [2024-07-25 04:06:07.940303] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.823 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.823 [2024-07-25 04:06:07.979396] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:52.823 [2024-07-25 04:06:08.009603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.823 [2024-07-25 04:06:08.103759] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.823 [2024-07-25 04:06:08.103843] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.823 [2024-07-25 04:06:08.103871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.823 [2024-07-25 04:06:08.103885] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.823 [2024-07-25 04:06:08.103896] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
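The long `cmp_versions 3.0.9 '>=' 3.0.0` walk near the top of this section (scripts/common.sh splitting each version on `.-:` and comparing fields left to right) can be condensed into a self-contained sketch. These are re-implementations of the helpers named in the trace, not the originals, and they assume purely numeric fields (the real `decimal` helper also maps rc/pre suffixes):

```shell
# Re-implementation sketch of scripts/common.sh cmp_versions/ge as traced
# in the log: split both versions on ".-:", then compare numeric fields
# left to right, padding the shorter version with zeros.
cmp_versions() {                 # usage: cmp_versions 3.0.9 '>=' 3.0.0
    local IFS=.-: op=$2
    local -a ver1 ver2
    read -ra ver1 <<<"$1"
    read -ra ver2 <<<"$3"
    local v lt=0 gt=0
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
        ((d1 > d2)) && { gt=1; break; }
        ((d1 < d2)) && { lt=1; break; }
    done
    case $op in
        '>=') ((lt == 0)) ;;
        '<=') ((gt == 0)) ;;
    esac
}
ge() { cmp_versions "$1" '>=' "$2"; }
```

This is what lets the FIPS preflight accept the host's OpenSSL 3.0.9 against the 3.0.0 floor before probing `openssl info -modulesdir` for fips.so.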
00:22:52.823 [2024-07-25 04:06:08.103925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.081 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.081 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:53.081 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:53.081 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:53.081 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:53.081 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.081 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:53.081 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:53.081 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:53.081 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:53.081 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:53.081 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:53.081 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:53.081 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:53.339 [2024-07-25 04:06:08.495107] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.339 [2024-07-25 04:06:08.511099] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:53.339 [2024-07-25 04:06:08.511372] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.339 [2024-07-25 04:06:08.541854] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:53.339 malloc0 00:22:53.339 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.339 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=873214 00:22:53.339 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.339 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 873214 /var/tmp/bdevperf.sock 00:22:53.339 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 873214 ']' 00:22:53.339 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.339 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:53.339 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:53.339 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:53.339 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:53.339 [2024-07-25 04:06:08.636555] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:22:53.339 [2024-07-25 04:06:08.636680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873214 ] 00:22:53.597 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.597 [2024-07-25 04:06:08.669672] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:53.597 [2024-07-25 04:06:08.697120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.597 [2024-07-25 04:06:08.788191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.597 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.597 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:53.597 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:53.854 [2024-07-25 04:06:09.135057] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.854 [2024-07-25 04:06:09.135179] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature 
spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:54.112 TLSTESTn1 00:22:54.112 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:54.112 Running I/O for 10 seconds... 00:23:06.302 00:23:06.302 Latency(us) 00:23:06.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.302 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:06.302 Verification LBA range: start 0x0 length 0x2000 00:23:06.302 TLSTESTn1 : 10.03 3387.03 13.23 0.00 0.00 37703.50 6213.78 69516.71 00:23:06.302 =================================================================================================================== 00:23:06.302 Total : 3387.03 13.23 0.00 0.00 37703.50 6213.78 69516.71 00:23:06.302 0 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- 
# tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:06.302 nvmf_trace.0 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 873214 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 873214 ']' 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 873214 00:23:06.302 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 873214 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 873214' 00:23:06.303 killing process with pid 873214 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 873214 00:23:06.303 Received shutdown signal, test time was about 10.000000 seconds 00:23:06.303 00:23:06.303 Latency(us) 00:23:06.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.303 =================================================================================================================== 00:23:06.303 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:06.303 [2024-07-25 04:06:19.512654] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 
scheduled for removal in v24.09 hit 1 times 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 873214 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:06.303 rmmod nvme_tcp 00:23:06.303 rmmod nvme_fabrics 00:23:06.303 rmmod nvme_keyring 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 872931 ']' 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 872931 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 872931 ']' 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 872931 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # 
ps --no-headers -o comm= 872931 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 872931' 00:23:06.303 killing process with pid 872931 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 872931 00:23:06.303 [2024-07-25 04:06:19.818424] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:06.303 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 872931 00:23:06.303 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:06.303 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:06.303 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:06.303 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:06.303 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:06.303 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.303 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.303 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.872 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:06.872 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:06.872 00:23:06.872 real 0m16.432s 00:23:06.872 user 0m20.892s 00:23:06.872 sys 0m5.882s 00:23:06.872 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:06.872 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:06.872 ************************************ 00:23:06.872 END TEST nvmf_fips 00:23:06.872 ************************************ 00:23:06.872 04:06:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:23:06.872 04:06:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:06.872 04:06:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:06.872 04:06:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:06.872 04:06:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:06.872 ************************************ 00:23:06.872 START TEST nvmf_fuzz 00:23:06.872 ************************************ 00:23:06.872 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:07.149 * Looking for test storage... 
00:23:07.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.149 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:07.150 
04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:07.150 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:09.048 04:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.048 
04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:09.048 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:09.048 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.048 04:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:09.048 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.048 
04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:09.048 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:09.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:09.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:23:09.048 00:23:09.048 --- 10.0.0.2 ping statistics --- 00:23:09.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.048 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:23:09.048 00:23:09.048 --- 10.0.0.1 ping statistics --- 00:23:09.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.048 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:09.048 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=876465 00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 876465 00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 876465 ']' 00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.049 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:09.615 Malloc0 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:09.615 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:41.679 Fuzzing completed. 
Shutting down the fuzz application 00:23:41.679 00:23:41.679 Dumping successful admin opcodes: 00:23:41.679 8, 9, 10, 24, 00:23:41.679 Dumping successful io opcodes: 00:23:41.679 0, 9, 00:23:41.679 NS: 0x200003aeff00 I/O qp, Total commands completed: 452586, total successful commands: 2629, random_seed: 1672812736 00:23:41.679 NS: 0x200003aeff00 admin qp, Total commands completed: 55248, total successful commands: 441, random_seed: 457125824 00:23:41.679 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:41.679 Fuzzing completed. Shutting down the fuzz application 00:23:41.679 00:23:41.679 Dumping successful admin opcodes: 00:23:41.679 24, 00:23:41.679 Dumping successful io opcodes: 00:23:41.679 00:23:41.679 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 882759552 00:23:41.679 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 882877899 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:41.679 04:06:56 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:41.679 rmmod nvme_tcp 00:23:41.679 rmmod nvme_fabrics 00:23:41.679 rmmod nvme_keyring 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 876465 ']' 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 876465 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 876465 ']' 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 876465 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 876465 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 876465' 00:23:41.679 killing process with pid 876465 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 876465 00:23:41.679 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 876465 00:23:41.936 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:41.936 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:41.936 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:41.936 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:41.936 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:41.936 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.936 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.936 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:44.461 00:23:44.461 real 0m37.166s 00:23:44.461 user 0m51.057s 00:23:44.461 sys 0m15.543s 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 
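Stripped of the xtrace noise, the nvmf_fuzz pass above is a short sequence: five RPCs to stand up a TCP subsystem backed by a malloc bdev, then two nvme_fuzz runs against it. A sketch, assuming a stock SPDK checkout run from its root with the target already listening on the default /var/tmp/spdk.sock (all flags and values are taken from the log; this needs a live target and is not standalone-runnable):

```shell
# Target-side setup, mirroring fabrics_fuzz.sh steps 19-25 above
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
# Run 1: 30 s of randomly generated commands on core 1 (-m 0x2), fixed
# seed for reproducibility (-S 123456), admin queue included (-a)
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
# Run 2: replay the canned command set from example.json instead
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j ./test/app/fuzz/nvme_fuzz/example.json -a
```

Nonzero "total successful commands" counts in the dumps above (2629 I/O, 441 admin for the random run) are expected: they are commands the target accepted, not failures; the test passes as long as the target survives and shuts down cleanly.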
00:23:44.461 ************************************ 00:23:44.461 END TEST nvmf_fuzz 00:23:44.461 ************************************ 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:44.461 ************************************ 00:23:44.461 START TEST nvmf_multiconnection 00:23:44.461 ************************************ 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:44.461 * Looking for test storage... 
00:23:44.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:23:44.461 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 
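The device discovery that follows (gather_supported_nvmf_pci_devs) declares per-family arrays and buckets each NIC by PCI device ID before picking target/initiator ports. In miniature, the bucketing reduces to a lookup table keyed on device ID; the associative-array form below is a simplification, with the Intel IDs visible in this log and a subset of the Mellanox IDs common.sh probes:

```shell
# Simplified NIC-family lookup; common.sh builds e810/x722/mlx arrays
# from pci_bus_cache instead, but the classification is the same idea.
declare -A nic_family=(
  [0x1592]=e810 [0x159b]=e810                           # Intel E810
  [0x37d2]=x722                                         # Intel X722
  [0x1017]=mlx [0x1019]=mlx [0x1015]=mlx [0x1013]=mlx   # Mellanox CX-4/5 (subset)
)
classify_nic() { echo "${nic_family[$1]:-unknown}"; }

classify_nic 0x159b   # -> e810, matching the two 0000:0a:00.x ports found below
```

Both ports found in this run (0000:0a:00.0 and 0000:0a:00.1, device 0x159b) land in the e810 bucket, which is why the log takes the `[[ e810 == e810 ]]` branch and ends up with two usable net devices.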
00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:46.360 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:46.360 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.360 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:46.361 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:23:46.361 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.361 04:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:46.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:46.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:23:46.361 00:23:46.361 --- 10.0.0.2 ping statistics --- 00:23:46.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.361 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:46.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:23:46.361 00:23:46.361 --- 10.0.0.1 ping statistics --- 00:23:46.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.361 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:46.361 04:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=882183 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 882183 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 882183 ']' 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:46.361 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.361 [2024-07-25 04:07:01.606702] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:23:46.361 [2024-07-25 04:07:01.606800] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.361 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.361 [2024-07-25 04:07:01.645616] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:46.620 [2024-07-25 04:07:01.677418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.620 [2024-07-25 04:07:01.770482] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.620 [2024-07-25 04:07:01.770560] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.620 [2024-07-25 04:07:01.770585] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.620 [2024-07-25 04:07:01.770598] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.620 [2024-07-25 04:07:01.770610] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
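The dual-namespace TCP topology assembled above, with the target port cvl_0_0 moved into the cvl_0_0_ns_spdk namespace at 10.0.0.2 and the initiator port cvl_0_1 left in the root namespace at 10.0.0.1, can be sketched as the commands below. Interface names are the cvl_* ones this log discovered; reproducing this requires root and a similar two-port NIC, so it is a recipe rather than a standalone script:

```shell
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Both directions must ping before the test proceeds
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# nvmf_tgt is then launched inside the namespace, as in the log above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
```

Running the target in its own namespace is what lets a single host act as both NVMe-oF target (10.0.0.2:4420) and initiator over a real physical link, which is why every subsequent RPC in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`.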
00:23:46.620 [2024-07-25 04:07:01.770699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.620 [2024-07-25 04:07:01.770751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.620 [2024-07-25 04:07:01.770815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.620 [2024-07-25 04:07:01.770817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.620 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.620 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:23:46.620 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:46.620 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:46.620 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.620 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.620 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:46.620 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.620 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.620 [2024-07-25 04:07:01.916392] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:46.879 04:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 Malloc1 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 [2024-07-25 04:07:01.972338] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 Malloc2 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 Malloc3 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 Malloc4 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 
04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 Malloc5 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:46.879 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.880 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.880 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.880 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.880 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:46.880 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:46.880 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 Malloc6 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 Malloc7 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 Malloc8 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 Malloc9 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 Malloc10 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 Malloc11 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:47.139 
04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.139 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.396 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.396 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:47.396 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.396 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.396 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.396 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:47.396 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:47.396 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
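The loop traced above (multiconnection.sh lines 21-25) issues the same four RPCs for each of the 11 subsystems: create a malloc bdev, create the subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A dry-run sketch of that loop is below; `rpc_cmd` is the wrapper name used in the log, but its body here is a stand-in that just echoes the call so the sketch runs without a live `nvmf_tgt`.

```shell
# Dry-run sketch of the per-subsystem setup loop seen in the trace.
# NVMF_SUBSYS=11 matches the 'seq 1 11' in the log.
NVMF_SUBSYS=11

# Stand-in for the real rpc_cmd wrapper, which forwards to scripts/rpc.py
# against the target's RPC socket. Here it only prints the invocation.
rpc_cmd() { echo "rpc_cmd $*"; }

for i in $(seq 1 "$NVMF_SUBSYS"); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
```

With the real wrapper in place, each iteration leaves subsystem `cnodeN` serving namespace `MallocN` with serial `SPDKN`, which is what the subsequent `nvme connect` loop discovers.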
00:23:47.961 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:47.961 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:47.961 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:47.961 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:47.961 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:49.859 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:49.859 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:49.859 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:23:49.859 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:49.859 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:49.859 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:49.859 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:49.859 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:50.790 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:50.790 04:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:50.790 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:50.790 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:50.790 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:52.685 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:52.685 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:52.685 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:23:52.685 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:52.685 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:52.685 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:52.685 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:52.685 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:53.257 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:53.257 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:53.257 04:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:53.257 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:53.257 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:55.155 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:55.155 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:55.155 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:23:55.413 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:55.413 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:55.413 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:55.413 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.413 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:23:55.978 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:55.978 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:55.978 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:55.978 
04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:55.978 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:57.876 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:57.876 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:57.876 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:23:57.876 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:57.876 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:57.876 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:57.876 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.876 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:23:58.808 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:58.808 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:58.808 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:58.808 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:58.808 04:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:00.706 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:00.706 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:00.706 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:24:00.706 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:00.706 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:00.706 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:00.706 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.706 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:01.639 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:01.639 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:01.639 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:01.639 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:01.639 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:03.537 04:07:18 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:03.537 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:03.537 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:24:03.537 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:03.538 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:03.538 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:03.538 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:03.538 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:04.471 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:04.471 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:04.471 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:04.471 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:04.471 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:06.369 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:06.369 04:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:06.369 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:24:06.369 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:06.369 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:06.369 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:06.369 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:06.369 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:07.301 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:07.301 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:07.301 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:07.301 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:07.301 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:09.196 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:09.196 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:09.196 04:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:24:09.196 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:09.196 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:09.196 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:09.196 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.196 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:10.128 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:10.128 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:10.128 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:10.128 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:10.128 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:12.025 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:12.025 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:12.025 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:24:12.025 04:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:12.025 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:12.025 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:12.025 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:12.025 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:12.955 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:12.955 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:12.955 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:12.955 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:12.955 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:14.894 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:14.894 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:14.894 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:24:14.894 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:14.894 04:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:15.151 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:15.151 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.151 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:16.083 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:16.083 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:16.083 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:16.083 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:16.083 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:17.976 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:17.976 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:17.976 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:24:17.976 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:17.976 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:17.976 
04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:17.976 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:17.976 [global] 00:24:17.976 thread=1 00:24:17.976 invalidate=1 00:24:17.976 rw=read 00:24:17.976 time_based=1 00:24:17.976 runtime=10 00:24:17.976 ioengine=libaio 00:24:17.976 direct=1 00:24:17.976 bs=262144 00:24:17.976 iodepth=64 00:24:17.976 norandommap=1 00:24:17.976 numjobs=1 00:24:17.976 00:24:17.976 [job0] 00:24:17.976 filename=/dev/nvme0n1 00:24:17.976 [job1] 00:24:17.976 filename=/dev/nvme10n1 00:24:17.976 [job2] 00:24:17.976 filename=/dev/nvme1n1 00:24:17.976 [job3] 00:24:17.976 filename=/dev/nvme2n1 00:24:17.976 [job4] 00:24:17.976 filename=/dev/nvme3n1 00:24:17.976 [job5] 00:24:17.976 filename=/dev/nvme4n1 00:24:17.976 [job6] 00:24:17.976 filename=/dev/nvme5n1 00:24:17.976 [job7] 00:24:17.976 filename=/dev/nvme6n1 00:24:17.976 [job8] 00:24:17.976 filename=/dev/nvme7n1 00:24:17.977 [job9] 00:24:17.977 filename=/dev/nvme8n1 00:24:17.977 [job10] 00:24:17.977 filename=/dev/nvme9n1 00:24:17.977 Could not set queue depth (nvme0n1) 00:24:17.977 Could not set queue depth (nvme10n1) 00:24:17.977 Could not set queue depth (nvme1n1) 00:24:17.977 Could not set queue depth (nvme2n1) 00:24:17.977 Could not set queue depth (nvme3n1) 00:24:17.977 Could not set queue depth (nvme4n1) 00:24:17.977 Could not set queue depth (nvme5n1) 00:24:17.977 Could not set queue depth (nvme6n1) 00:24:17.977 Could not set queue depth (nvme7n1) 00:24:17.977 Could not set queue depth (nvme8n1) 00:24:17.977 Could not set queue depth (nvme9n1) 00:24:18.234 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:18.234 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:24:18.234 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:18.234 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:18.234 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:18.234 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:18.234 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:18.234 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:18.234 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:18.234 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:18.234 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:18.234 fio-3.35 00:24:18.234 Starting 11 threads 00:24:30.447 00:24:30.447 job0: (groupid=0, jobs=1): err= 0: pid=886426: Thu Jul 25 04:07:43 2024 00:24:30.447 read: IOPS=945, BW=236MiB/s (248MB/s)(2388MiB/10100msec) 00:24:30.447 slat (usec): min=13, max=97114, avg=969.17, stdev=3770.93 00:24:30.447 clat (usec): min=1068, max=225535, avg=66647.45, stdev=41472.10 00:24:30.447 lat (usec): min=1086, max=267715, avg=67616.62, stdev=42158.93 00:24:30.447 clat percentiles (msec): 00:24:30.447 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 29], 20.00th=[ 33], 00:24:30.447 | 30.00th=[ 35], 40.00th=[ 41], 50.00th=[ 52], 60.00th=[ 69], 00:24:30.447 | 70.00th=[ 87], 80.00th=[ 111], 90.00th=[ 132], 95.00th=[ 144], 00:24:30.447 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 205], 99.95th=[ 205], 00:24:30.447 | 99.99th=[ 226] 00:24:30.447 bw ( KiB/s): min=109568, 
max=472064, per=12.63%, avg=242846.15, stdev=119824.46, samples=20 00:24:30.447 iops : min= 428, max= 1844, avg=948.55, stdev=468.06, samples=20 00:24:30.447 lat (msec) : 2=0.30%, 4=0.70%, 10=1.99%, 20=3.02%, 50=42.81% 00:24:30.447 lat (msec) : 100=27.42%, 250=23.76% 00:24:30.447 cpu : usr=0.57%, sys=2.54%, ctx=1946, majf=0, minf=4097 00:24:30.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:30.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.447 issued rwts: total=9552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.447 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.447 job1: (groupid=0, jobs=1): err= 0: pid=886427: Thu Jul 25 04:07:43 2024 00:24:30.447 read: IOPS=669, BW=167MiB/s (175MB/s)(1689MiB/10096msec) 00:24:30.447 slat (usec): min=11, max=56999, avg=1232.19, stdev=3684.84 00:24:30.447 clat (msec): min=4, max=212, avg=94.33, stdev=30.95 00:24:30.447 lat (msec): min=4, max=212, avg=95.57, stdev=31.43 00:24:30.447 clat percentiles (msec): 00:24:30.447 | 1.00th=[ 20], 5.00th=[ 44], 10.00th=[ 57], 20.00th=[ 71], 00:24:30.447 | 30.00th=[ 80], 40.00th=[ 86], 50.00th=[ 94], 60.00th=[ 102], 00:24:30.447 | 70.00th=[ 110], 80.00th=[ 118], 90.00th=[ 131], 95.00th=[ 150], 00:24:30.447 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 211], 99.95th=[ 211], 00:24:30.447 | 99.99th=[ 213] 00:24:30.447 bw ( KiB/s): min=113152, max=239104, per=8.91%, avg=171301.20, stdev=37370.32, samples=20 00:24:30.447 iops : min= 442, max= 934, avg=669.10, stdev=145.95, samples=20 00:24:30.447 lat (msec) : 10=0.13%, 20=0.95%, 50=5.73%, 100=51.44%, 250=41.76% 00:24:30.447 cpu : usr=0.33%, sys=2.22%, ctx=1569, majf=0, minf=4097 00:24:30.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:30.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.447 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.447 issued rwts: total=6756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.447 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.447 job2: (groupid=0, jobs=1): err= 0: pid=886428: Thu Jul 25 04:07:43 2024 00:24:30.447 read: IOPS=622, BW=156MiB/s (163MB/s)(1571MiB/10092msec) 00:24:30.447 slat (usec): min=9, max=57340, avg=845.39, stdev=3342.68 00:24:30.447 clat (usec): min=824, max=213263, avg=101884.11, stdev=36176.15 00:24:30.447 lat (usec): min=856, max=213292, avg=102729.50, stdev=36527.23 00:24:30.447 clat percentiles (msec): 00:24:30.447 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 58], 20.00th=[ 80], 00:24:30.447 | 30.00th=[ 89], 40.00th=[ 99], 50.00th=[ 106], 60.00th=[ 113], 00:24:30.447 | 70.00th=[ 123], 80.00th=[ 133], 90.00th=[ 142], 95.00th=[ 155], 00:24:30.447 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 186], 99.95th=[ 188], 00:24:30.447 | 99.99th=[ 213] 00:24:30.447 bw ( KiB/s): min=117760, max=239104, per=8.28%, avg=159196.10, stdev=32518.95, samples=20 00:24:30.447 iops : min= 460, max= 934, avg=621.80, stdev=127.01, samples=20 00:24:30.447 lat (usec) : 1000=0.11% 00:24:30.447 lat (msec) : 2=0.27%, 4=0.70%, 10=1.78%, 20=2.63%, 50=3.61% 00:24:30.447 lat (msec) : 100=32.98%, 250=57.92% 00:24:30.447 cpu : usr=0.26%, sys=1.66%, ctx=1865, majf=0, minf=4097 00:24:30.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:30.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.448 issued rwts: total=6283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.448 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.448 job3: (groupid=0, jobs=1): err= 0: pid=886429: Thu Jul 25 04:07:43 2024 00:24:30.448 read: IOPS=723, BW=181MiB/s (190MB/s)(1818MiB/10052msec) 00:24:30.448 slat (usec): min=14, max=77366, avg=1313.25, stdev=3686.78 
00:24:30.448 clat (msec): min=10, max=186, avg=87.08, stdev=25.76 00:24:30.448 lat (msec): min=10, max=186, avg=88.40, stdev=26.22 00:24:30.448 clat percentiles (msec): 00:24:30.448 | 1.00th=[ 29], 5.00th=[ 52], 10.00th=[ 57], 20.00th=[ 67], 00:24:30.448 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 89], 00:24:30.448 | 70.00th=[ 99], 80.00th=[ 109], 90.00th=[ 123], 95.00th=[ 133], 00:24:30.448 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 174], 99.95th=[ 178], 00:24:30.448 | 99.99th=[ 186] 00:24:30.448 bw ( KiB/s): min=120320, max=247808, per=9.60%, avg=184518.85, stdev=43353.39, samples=20 00:24:30.448 iops : min= 470, max= 968, avg=720.70, stdev=169.41, samples=20 00:24:30.448 lat (msec) : 20=0.63%, 50=3.92%, 100=67.34%, 250=28.11% 00:24:30.448 cpu : usr=0.44%, sys=2.32%, ctx=1566, majf=0, minf=4097 00:24:30.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:30.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.448 issued rwts: total=7272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.448 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.448 job4: (groupid=0, jobs=1): err= 0: pid=886430: Thu Jul 25 04:07:43 2024 00:24:30.448 read: IOPS=976, BW=244MiB/s (256MB/s)(2454MiB/10050msec) 00:24:30.448 slat (usec): min=9, max=66303, avg=965.53, stdev=2665.71 00:24:30.448 clat (msec): min=4, max=220, avg=64.51, stdev=25.58 00:24:30.448 lat (msec): min=4, max=220, avg=65.47, stdev=25.93 00:24:30.448 clat percentiles (msec): 00:24:30.448 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 44], 00:24:30.448 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 60], 60.00th=[ 67], 00:24:30.448 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 110], 00:24:30.448 | 99.00th=[ 150], 99.50th=[ 161], 99.90th=[ 171], 99.95th=[ 178], 00:24:30.448 | 99.99th=[ 222] 00:24:30.448 bw ( KiB/s): min=151552, 
max=460800, per=12.99%, avg=249654.15, stdev=74498.80, samples=20 00:24:30.448 iops : min= 592, max= 1800, avg=975.15, stdev=291.01, samples=20 00:24:30.448 lat (msec) : 10=0.21%, 20=0.40%, 50=30.71%, 100=59.66%, 250=9.01% 00:24:30.448 cpu : usr=0.43%, sys=3.20%, ctx=2037, majf=0, minf=4097 00:24:30.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:30.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.448 issued rwts: total=9817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.448 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.448 job5: (groupid=0, jobs=1): err= 0: pid=886431: Thu Jul 25 04:07:43 2024 00:24:30.448 read: IOPS=640, BW=160MiB/s (168MB/s)(1608MiB/10051msec) 00:24:30.448 slat (usec): min=14, max=47280, avg=1541.40, stdev=3838.26 00:24:30.448 clat (msec): min=30, max=187, avg=98.38, stdev=21.86 00:24:30.448 lat (msec): min=30, max=204, avg=99.92, stdev=22.21 00:24:30.448 clat percentiles (msec): 00:24:30.448 | 1.00th=[ 60], 5.00th=[ 70], 10.00th=[ 74], 20.00th=[ 80], 00:24:30.448 | 30.00th=[ 85], 40.00th=[ 90], 50.00th=[ 96], 60.00th=[ 103], 00:24:30.448 | 70.00th=[ 109], 80.00th=[ 115], 90.00th=[ 128], 95.00th=[ 138], 00:24:30.448 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 188], 99.95th=[ 188], 00:24:30.448 | 99.99th=[ 188] 00:24:30.448 bw ( KiB/s): min=122368, max=204800, per=8.48%, avg=163042.75, stdev=26733.57, samples=20 00:24:30.448 iops : min= 478, max= 800, avg=636.85, stdev=104.43, samples=20 00:24:30.448 lat (msec) : 50=0.48%, 100=55.59%, 250=43.93% 00:24:30.448 cpu : usr=0.44%, sys=1.98%, ctx=1385, majf=0, minf=3721 00:24:30.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:30.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:24:30.448 issued rwts: total=6433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.448 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.448 job6: (groupid=0, jobs=1): err= 0: pid=886432: Thu Jul 25 04:07:43 2024 00:24:30.448 read: IOPS=572, BW=143MiB/s (150MB/s)(1446MiB/10097msec) 00:24:30.448 slat (usec): min=13, max=116995, avg=1726.06, stdev=4849.11 00:24:30.448 clat (msec): min=36, max=228, avg=109.88, stdev=30.53 00:24:30.448 lat (msec): min=36, max=228, avg=111.60, stdev=31.05 00:24:30.448 clat percentiles (msec): 00:24:30.448 | 1.00th=[ 47], 5.00th=[ 58], 10.00th=[ 67], 20.00th=[ 84], 00:24:30.448 | 30.00th=[ 94], 40.00th=[ 103], 50.00th=[ 110], 60.00th=[ 118], 00:24:30.448 | 70.00th=[ 129], 80.00th=[ 138], 90.00th=[ 148], 95.00th=[ 159], 00:24:30.448 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 213], 99.95th=[ 213], 00:24:30.448 | 99.99th=[ 230] 00:24:30.448 bw ( KiB/s): min=93883, max=267776, per=7.62%, avg=146435.00, stdev=41892.87, samples=20 00:24:30.448 iops : min= 366, max= 1046, avg=571.90, stdev=163.67, samples=20 00:24:30.448 lat (msec) : 50=1.33%, 100=36.66%, 250=62.01% 00:24:30.448 cpu : usr=0.33%, sys=1.96%, ctx=1245, majf=0, minf=4097 00:24:30.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:30.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.448 issued rwts: total=5785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.448 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.448 job7: (groupid=0, jobs=1): err= 0: pid=886437: Thu Jul 25 04:07:43 2024 00:24:30.448 read: IOPS=607, BW=152MiB/s (159MB/s)(1534MiB/10098msec) 00:24:30.448 slat (usec): min=13, max=71984, avg=1535.87, stdev=4448.98 00:24:30.448 clat (msec): min=8, max=227, avg=103.71, stdev=33.24 00:24:30.448 lat (msec): min=8, max=227, avg=105.25, stdev=33.85 00:24:30.448 clat percentiles (msec): 
00:24:30.448 | 1.00th=[ 23], 5.00th=[ 51], 10.00th=[ 63], 20.00th=[ 75], 00:24:30.448 | 30.00th=[ 86], 40.00th=[ 96], 50.00th=[ 105], 60.00th=[ 112], 00:24:30.448 | 70.00th=[ 124], 80.00th=[ 134], 90.00th=[ 146], 95.00th=[ 155], 00:24:30.448 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 215], 99.95th=[ 220], 00:24:30.448 | 99.99th=[ 228] 00:24:30.448 bw ( KiB/s): min=98816, max=259072, per=8.08%, avg=155410.80, stdev=41016.21, samples=20 00:24:30.448 iops : min= 386, max= 1012, avg=607.00, stdev=160.21, samples=20 00:24:30.448 lat (msec) : 10=0.10%, 20=0.65%, 50=4.07%, 100=40.09%, 250=55.08% 00:24:30.448 cpu : usr=0.41%, sys=1.97%, ctx=1383, majf=0, minf=4097 00:24:30.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:30.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.448 issued rwts: total=6136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.448 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.448 job8: (groupid=0, jobs=1): err= 0: pid=886466: Thu Jul 25 04:07:43 2024 00:24:30.448 read: IOPS=600, BW=150MiB/s (157MB/s)(1511MiB/10064msec) 00:24:30.448 slat (usec): min=9, max=75025, avg=1354.01, stdev=4289.26 00:24:30.448 clat (msec): min=2, max=214, avg=105.14, stdev=44.74 00:24:30.448 lat (msec): min=2, max=227, avg=106.50, stdev=45.36 00:24:30.448 clat percentiles (msec): 00:24:30.448 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 33], 20.00th=[ 69], 00:24:30.448 | 30.00th=[ 87], 40.00th=[ 104], 50.00th=[ 114], 60.00th=[ 124], 00:24:30.448 | 70.00th=[ 133], 80.00th=[ 142], 90.00th=[ 159], 95.00th=[ 169], 00:24:30.448 | 99.00th=[ 188], 99.50th=[ 201], 99.90th=[ 215], 99.95th=[ 215], 00:24:30.448 | 99.99th=[ 215] 00:24:30.448 bw ( KiB/s): min=110080, max=366080, per=7.96%, avg=153101.90, stdev=58597.55, samples=20 00:24:30.448 iops : min= 430, max= 1430, avg=598.00, stdev=228.92, samples=20 00:24:30.448 lat 
(msec) : 4=0.25%, 10=1.39%, 20=2.18%, 50=12.92%, 100=21.00% 00:24:30.448 lat (msec) : 250=62.26% 00:24:30.448 cpu : usr=0.32%, sys=1.82%, ctx=1464, majf=0, minf=4097 00:24:30.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:30.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.448 issued rwts: total=6044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.448 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.448 job9: (groupid=0, jobs=1): err= 0: pid=886485: Thu Jul 25 04:07:43 2024 00:24:30.448 read: IOPS=607, BW=152MiB/s (159MB/s)(1529MiB/10067msec) 00:24:30.448 slat (usec): min=9, max=113847, avg=940.91, stdev=4079.66 00:24:30.448 clat (usec): min=1521, max=255818, avg=104333.65, stdev=41538.92 00:24:30.448 lat (usec): min=1541, max=259716, avg=105274.56, stdev=42032.76 00:24:30.448 clat percentiles (msec): 00:24:30.448 | 1.00th=[ 10], 5.00th=[ 35], 10.00th=[ 61], 20.00th=[ 73], 00:24:30.448 | 30.00th=[ 83], 40.00th=[ 93], 50.00th=[ 104], 60.00th=[ 112], 00:24:30.448 | 70.00th=[ 122], 80.00th=[ 134], 90.00th=[ 157], 95.00th=[ 178], 00:24:30.448 | 99.00th=[ 230], 99.50th=[ 245], 99.90th=[ 251], 99.95th=[ 253], 00:24:30.448 | 99.99th=[ 255] 00:24:30.448 bw ( KiB/s): min=108032, max=217600, per=8.06%, avg=154924.85, stdev=26961.26, samples=20 00:24:30.448 iops : min= 422, max= 850, avg=605.10, stdev=105.31, samples=20 00:24:30.448 lat (msec) : 2=0.13%, 4=0.20%, 10=0.69%, 20=1.50%, 50=5.26% 00:24:30.448 lat (msec) : 100=39.40%, 250=52.67%, 500=0.15% 00:24:30.448 cpu : usr=0.23%, sys=1.70%, ctx=1742, majf=0, minf=4097 00:24:30.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:30.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.448 issued rwts: 
total=6116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.448 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.448 job10: (groupid=0, jobs=1): err= 0: pid=886505: Thu Jul 25 04:07:43 2024 00:24:30.448 read: IOPS=560, BW=140MiB/s (147MB/s)(1415MiB/10092msec) 00:24:30.448 slat (usec): min=8, max=303731, avg=1249.76, stdev=7276.60 00:24:30.448 clat (usec): min=934, max=489981, avg=112818.47, stdev=67898.35 00:24:30.448 lat (usec): min=953, max=490001, avg=114068.23, stdev=68659.97 00:24:30.448 clat percentiles (msec): 00:24:30.448 | 1.00th=[ 5], 5.00th=[ 24], 10.00th=[ 50], 20.00th=[ 75], 00:24:30.448 | 30.00th=[ 87], 40.00th=[ 97], 50.00th=[ 105], 60.00th=[ 112], 00:24:30.448 | 70.00th=[ 124], 80.00th=[ 140], 90.00th=[ 165], 95.00th=[ 194], 00:24:30.448 | 99.00th=[ 451], 99.50th=[ 460], 99.90th=[ 464], 99.95th=[ 481], 00:24:30.448 | 99.99th=[ 489] 00:24:30.448 bw ( KiB/s): min=32833, max=252416, per=7.45%, avg=143205.55, stdev=56850.31, samples=20 00:24:30.448 iops : min= 128, max= 986, avg=559.30, stdev=222.09, samples=20 00:24:30.448 lat (usec) : 1000=0.05% 00:24:30.448 lat (msec) : 2=0.30%, 4=0.37%, 10=1.89%, 20=1.43%, 50=6.03% 00:24:30.448 lat (msec) : 100=33.21%, 250=52.63%, 500=4.08% 00:24:30.448 cpu : usr=0.24%, sys=1.72%, ctx=1480, majf=0, minf=4097 00:24:30.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:30.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.448 issued rwts: total=5658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.448 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.448 00:24:30.448 Run status group 0 (all jobs): 00:24:30.448 READ: bw=1878MiB/s (1969MB/s), 140MiB/s-244MiB/s (147MB/s-256MB/s), io=18.5GiB (19.9GB), run=10050-10100msec 00:24:30.448 00:24:30.448 Disk stats (read/write): 00:24:30.448 nvme0n1: ios=18894/0, merge=0/0, ticks=1233087/0, 
in_queue=1233087, util=97.14% 00:24:30.448 nvme10n1: ios=13256/0, merge=0/0, ticks=1231517/0, in_queue=1231517, util=97.36% 00:24:30.448 nvme1n1: ios=12346/0, merge=0/0, ticks=1244583/0, in_queue=1244583, util=97.62% 00:24:30.448 nvme2n1: ios=14269/0, merge=0/0, ticks=1234620/0, in_queue=1234620, util=97.77% 00:24:30.448 nvme3n1: ios=19402/0, merge=0/0, ticks=1236876/0, in_queue=1236876, util=97.83% 00:24:30.448 nvme4n1: ios=12583/0, merge=0/0, ticks=1229254/0, in_queue=1229254, util=98.16% 00:24:30.448 nvme5n1: ios=11377/0, merge=0/0, ticks=1226642/0, in_queue=1226642, util=98.32% 00:24:30.448 nvme6n1: ios=12079/0, merge=0/0, ticks=1228998/0, in_queue=1228998, util=98.44% 00:24:30.448 nvme7n1: ios=11895/0, merge=0/0, ticks=1230579/0, in_queue=1230579, util=98.89% 00:24:30.448 nvme8n1: ios=12015/0, merge=0/0, ticks=1238760/0, in_queue=1238760, util=99.10% 00:24:30.448 nvme9n1: ios=11117/0, merge=0/0, ticks=1237302/0, in_queue=1237302, util=99.22% 00:24:30.448 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:30.448 [global] 00:24:30.448 thread=1 00:24:30.448 invalidate=1 00:24:30.448 rw=randwrite 00:24:30.448 time_based=1 00:24:30.448 runtime=10 00:24:30.448 ioengine=libaio 00:24:30.448 direct=1 00:24:30.448 bs=262144 00:24:30.448 iodepth=64 00:24:30.448 norandommap=1 00:24:30.448 numjobs=1 00:24:30.448 00:24:30.448 [job0] 00:24:30.448 filename=/dev/nvme0n1 00:24:30.448 [job1] 00:24:30.448 filename=/dev/nvme10n1 00:24:30.448 [job2] 00:24:30.448 filename=/dev/nvme1n1 00:24:30.448 [job3] 00:24:30.448 filename=/dev/nvme2n1 00:24:30.448 [job4] 00:24:30.448 filename=/dev/nvme3n1 00:24:30.448 [job5] 00:24:30.448 filename=/dev/nvme4n1 00:24:30.448 [job6] 00:24:30.448 filename=/dev/nvme5n1 00:24:30.448 [job7] 00:24:30.448 filename=/dev/nvme6n1 00:24:30.448 [job8] 00:24:30.448 filename=/dev/nvme7n1 00:24:30.448 
[job9] 00:24:30.448 filename=/dev/nvme8n1 00:24:30.448 [job10] 00:24:30.448 filename=/dev/nvme9n1 00:24:30.448 Could not set queue depth (nvme0n1) 00:24:30.448 Could not set queue depth (nvme10n1) 00:24:30.448 Could not set queue depth (nvme1n1) 00:24:30.448 Could not set queue depth (nvme2n1) 00:24:30.448 Could not set queue depth (nvme3n1) 00:24:30.448 Could not set queue depth (nvme4n1) 00:24:30.448 Could not set queue depth (nvme5n1) 00:24:30.448 Could not set queue depth (nvme6n1) 00:24:30.448 Could not set queue depth (nvme7n1) 00:24:30.448 Could not set queue depth (nvme8n1) 00:24:30.448 Could not set queue depth (nvme9n1) 00:24:30.448 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:30.448 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:30.448 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:30.448 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:30.448 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:30.448 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:30.448 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:30.448 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:30.448 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:30.448 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:30.448 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, 
(T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:30.448 fio-3.35 00:24:30.448 Starting 11 threads 00:24:40.412 00:24:40.412 job0: (groupid=0, jobs=1): err= 0: pid=887607: Thu Jul 25 04:07:54 2024 00:24:40.412 write: IOPS=508, BW=127MiB/s (133MB/s)(1290MiB/10145msec); 0 zone resets 00:24:40.412 slat (usec): min=24, max=110889, avg=1359.42, stdev=3948.79 00:24:40.412 clat (usec): min=1568, max=384757, avg=124461.19, stdev=59791.16 00:24:40.412 lat (usec): min=1614, max=384799, avg=125820.62, stdev=60513.42 00:24:40.412 clat percentiles (msec): 00:24:40.412 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 41], 20.00th=[ 73], 00:24:40.412 | 30.00th=[ 96], 40.00th=[ 113], 50.00th=[ 125], 60.00th=[ 140], 00:24:40.412 | 70.00th=[ 150], 80.00th=[ 174], 90.00th=[ 197], 95.00th=[ 226], 00:24:40.412 | 99.00th=[ 284], 99.50th=[ 292], 99.90th=[ 372], 99.95th=[ 380], 00:24:40.412 | 99.99th=[ 384] 00:24:40.412 bw ( KiB/s): min=76800, max=211968, per=9.01%, avg=130432.00, stdev=36674.46, samples=20 00:24:40.412 iops : min= 300, max= 828, avg=509.50, stdev=143.26, samples=20 00:24:40.412 lat (msec) : 2=0.06%, 4=0.25%, 10=1.11%, 20=1.78%, 50=10.00% 00:24:40.412 lat (msec) : 100=19.29%, 250=65.06%, 500=2.44% 00:24:40.412 cpu : usr=1.77%, sys=1.98%, ctx=2863, majf=0, minf=1 00:24:40.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:40.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.412 issued rwts: total=0,5158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.412 job1: (groupid=0, jobs=1): err= 0: pid=887619: Thu Jul 25 04:07:54 2024 00:24:40.412 write: IOPS=492, BW=123MiB/s (129MB/s)(1249MiB/10148msec); 0 zone resets 00:24:40.412 slat (usec): min=18, max=93029, avg=1391.24, stdev=4327.55 00:24:40.412 clat (usec): min=1073, max=373449, avg=128484.25, stdev=78111.36 
00:24:40.412 lat (usec): min=1114, max=373504, avg=129875.49, stdev=79040.42 00:24:40.412 clat percentiles (msec): 00:24:40.412 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 34], 20.00th=[ 57], 00:24:40.412 | 30.00th=[ 71], 40.00th=[ 93], 50.00th=[ 125], 60.00th=[ 144], 00:24:40.412 | 70.00th=[ 167], 80.00th=[ 199], 90.00th=[ 241], 95.00th=[ 271], 00:24:40.412 | 99.00th=[ 321], 99.50th=[ 355], 99.90th=[ 368], 99.95th=[ 368], 00:24:40.412 | 99.99th=[ 376] 00:24:40.412 bw ( KiB/s): min=61440, max=240640, per=8.73%, avg=126310.40, stdev=48528.28, samples=20 00:24:40.412 iops : min= 240, max= 940, avg=493.40, stdev=189.56, samples=20 00:24:40.412 lat (msec) : 2=0.18%, 4=0.40%, 10=1.18%, 20=2.70%, 50=12.79% 00:24:40.412 lat (msec) : 100=25.20%, 250=48.91%, 500=8.65% 00:24:40.412 cpu : usr=1.69%, sys=1.90%, ctx=2812, majf=0, minf=1 00:24:40.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:40.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.412 issued rwts: total=0,4997,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.412 job2: (groupid=0, jobs=1): err= 0: pid=887620: Thu Jul 25 04:07:54 2024 00:24:40.412 write: IOPS=515, BW=129MiB/s (135MB/s)(1307MiB/10147msec); 0 zone resets 00:24:40.412 slat (usec): min=15, max=81825, avg=1301.20, stdev=4193.52 00:24:40.412 clat (msec): min=2, max=340, avg=122.85, stdev=76.33 00:24:40.412 lat (msec): min=2, max=340, avg=124.15, stdev=77.42 00:24:40.412 clat percentiles (msec): 00:24:40.412 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 31], 20.00th=[ 54], 00:24:40.412 | 30.00th=[ 79], 40.00th=[ 94], 50.00th=[ 113], 60.00th=[ 130], 00:24:40.412 | 70.00th=[ 150], 80.00th=[ 176], 90.00th=[ 241], 95.00th=[ 284], 00:24:40.412 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 338], 99.95th=[ 342], 00:24:40.412 | 99.99th=[ 342] 00:24:40.412 bw 
( KiB/s): min=51200, max=223232, per=9.14%, avg=132230.90, stdev=54432.35, samples=20 00:24:40.412 iops : min= 200, max= 872, avg=516.50, stdev=212.66, samples=20 00:24:40.412 lat (msec) : 4=0.44%, 10=1.34%, 20=3.42%, 50=13.12%, 100=24.23% 00:24:40.412 lat (msec) : 250=49.16%, 500=8.28% 00:24:40.412 cpu : usr=1.76%, sys=1.92%, ctx=3242, majf=0, minf=1 00:24:40.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:40.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.412 issued rwts: total=0,5228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.412 job3: (groupid=0, jobs=1): err= 0: pid=887621: Thu Jul 25 04:07:54 2024 00:24:40.412 write: IOPS=544, BW=136MiB/s (143MB/s)(1381MiB/10147msec); 0 zone resets 00:24:40.412 slat (usec): min=20, max=95836, avg=1313.76, stdev=3792.38 00:24:40.412 clat (usec): min=1414, max=335480, avg=116181.73, stdev=68074.11 00:24:40.412 lat (usec): min=1551, max=335510, avg=117495.49, stdev=68951.15 00:24:40.412 clat percentiles (msec): 00:24:40.412 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 32], 20.00th=[ 56], 00:24:40.412 | 30.00th=[ 77], 40.00th=[ 92], 50.00th=[ 106], 60.00th=[ 118], 00:24:40.412 | 70.00th=[ 146], 80.00th=[ 182], 90.00th=[ 211], 95.00th=[ 236], 00:24:40.412 | 99.00th=[ 305], 99.50th=[ 313], 99.90th=[ 321], 99.95th=[ 334], 00:24:40.412 | 99.99th=[ 334] 00:24:40.412 bw ( KiB/s): min=69632, max=219136, per=9.66%, avg=139759.85, stdev=45710.26, samples=20 00:24:40.412 iops : min= 272, max= 856, avg=545.90, stdev=178.59, samples=20 00:24:40.412 lat (msec) : 2=0.05%, 4=0.33%, 10=1.88%, 20=2.75%, 50=11.48% 00:24:40.412 lat (msec) : 100=30.97%, 250=48.64%, 500=3.89% 00:24:40.412 cpu : usr=1.69%, sys=1.84%, ctx=3035, majf=0, minf=1 00:24:40.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 
00:24:40.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.412 issued rwts: total=0,5522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.412 job4: (groupid=0, jobs=1): err= 0: pid=887622: Thu Jul 25 04:07:54 2024 00:24:40.412 write: IOPS=509, BW=127MiB/s (134MB/s)(1293MiB/10146msec); 0 zone resets 00:24:40.412 slat (usec): min=23, max=55320, avg=1669.65, stdev=3757.36 00:24:40.412 clat (msec): min=2, max=324, avg=123.83, stdev=62.85 00:24:40.412 lat (msec): min=2, max=338, avg=125.50, stdev=63.58 00:24:40.412 clat percentiles (msec): 00:24:40.412 | 1.00th=[ 11], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 70], 00:24:40.412 | 30.00th=[ 80], 40.00th=[ 101], 50.00th=[ 123], 60.00th=[ 144], 00:24:40.412 | 70.00th=[ 159], 80.00th=[ 171], 90.00th=[ 203], 95.00th=[ 232], 00:24:40.412 | 99.00th=[ 296], 99.50th=[ 309], 99.90th=[ 321], 99.95th=[ 321], 00:24:40.412 | 99.99th=[ 326] 00:24:40.412 bw ( KiB/s): min=69120, max=294400, per=9.04%, avg=130782.65, stdev=53529.48, samples=20 00:24:40.412 iops : min= 270, max= 1150, avg=510.85, stdev=209.08, samples=20 00:24:40.412 lat (msec) : 4=0.04%, 10=0.81%, 20=2.15%, 50=10.98%, 100=25.91% 00:24:40.412 lat (msec) : 250=56.45%, 500=3.65% 00:24:40.412 cpu : usr=1.84%, sys=1.77%, ctx=1981, majf=0, minf=1 00:24:40.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:40.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.412 issued rwts: total=0,5171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.412 job5: (groupid=0, jobs=1): err= 0: pid=887623: Thu Jul 25 04:07:54 2024 00:24:40.412 write: IOPS=594, BW=149MiB/s (156MB/s)(1510MiB/10150msec); 
0 zone resets 00:24:40.412 slat (usec): min=18, max=80390, avg=1363.36, stdev=3760.78 00:24:40.412 clat (usec): min=1180, max=351326, avg=106146.02, stdev=72411.39 00:24:40.412 lat (usec): min=1255, max=351366, avg=107509.38, stdev=73413.05 00:24:40.412 clat percentiles (msec): 00:24:40.412 | 1.00th=[ 12], 5.00th=[ 35], 10.00th=[ 45], 20.00th=[ 48], 00:24:40.412 | 30.00th=[ 54], 40.00th=[ 77], 50.00th=[ 90], 60.00th=[ 103], 00:24:40.412 | 70.00th=[ 116], 80.00th=[ 144], 90.00th=[ 211], 95.00th=[ 288], 00:24:40.412 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 347], 99.95th=[ 351], 00:24:40.413 | 99.99th=[ 351] 00:24:40.413 bw ( KiB/s): min=47104, max=330240, per=10.57%, avg=153005.80, stdev=80394.73, samples=20 00:24:40.413 iops : min= 184, max= 1290, avg=597.65, stdev=314.02, samples=20 00:24:40.413 lat (msec) : 2=0.05%, 4=0.23%, 10=0.53%, 20=1.39%, 50=23.75% 00:24:40.413 lat (msec) : 100=32.07%, 250=34.79%, 500=7.19% 00:24:40.413 cpu : usr=2.00%, sys=2.14%, ctx=2426, majf=0, minf=1 00:24:40.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:40.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.413 issued rwts: total=0,6039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.413 job6: (groupid=0, jobs=1): err= 0: pid=887624: Thu Jul 25 04:07:54 2024 00:24:40.413 write: IOPS=533, BW=133MiB/s (140MB/s)(1352MiB/10147msec); 0 zone resets 00:24:40.413 slat (usec): min=21, max=101626, avg=1408.06, stdev=3784.05 00:24:40.413 clat (usec): min=1986, max=328678, avg=118585.32, stdev=61059.87 00:24:40.413 lat (msec): min=2, max=328, avg=119.99, stdev=61.78 00:24:40.413 clat percentiles (msec): 00:24:40.413 | 1.00th=[ 16], 5.00th=[ 29], 10.00th=[ 43], 20.00th=[ 75], 00:24:40.413 | 30.00th=[ 83], 40.00th=[ 92], 50.00th=[ 106], 60.00th=[ 120], 00:24:40.413 | 70.00th=[ 
146], 80.00th=[ 176], 90.00th=[ 209], 95.00th=[ 232], 00:24:40.413 | 99.00th=[ 296], 99.50th=[ 305], 99.90th=[ 317], 99.95th=[ 321], 00:24:40.413 | 99.99th=[ 330] 00:24:40.413 bw ( KiB/s): min=69632, max=241664, per=9.46%, avg=136864.95, stdev=45287.75, samples=20 00:24:40.413 iops : min= 272, max= 944, avg=534.60, stdev=176.95, samples=20 00:24:40.413 lat (msec) : 2=0.02%, 4=0.06%, 10=0.48%, 20=1.22%, 50=9.69% 00:24:40.413 lat (msec) : 100=35.29%, 250=51.01%, 500=2.24% 00:24:40.413 cpu : usr=1.64%, sys=2.33%, ctx=2696, majf=0, minf=1 00:24:40.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:40.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.413 issued rwts: total=0,5409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.413 job7: (groupid=0, jobs=1): err= 0: pid=887625: Thu Jul 25 04:07:54 2024 00:24:40.413 write: IOPS=573, BW=143MiB/s (150MB/s)(1442MiB/10051msec); 0 zone resets 00:24:40.413 slat (usec): min=23, max=95920, avg=1085.19, stdev=3378.72 00:24:40.413 clat (msec): min=2, max=333, avg=110.37, stdev=61.54 00:24:40.413 lat (msec): min=2, max=335, avg=111.46, stdev=62.21 00:24:40.413 clat percentiles (msec): 00:24:40.413 | 1.00th=[ 10], 5.00th=[ 22], 10.00th=[ 35], 20.00th=[ 52], 00:24:40.413 | 30.00th=[ 64], 40.00th=[ 92], 50.00th=[ 111], 60.00th=[ 122], 00:24:40.413 | 70.00th=[ 142], 80.00th=[ 165], 90.00th=[ 192], 95.00th=[ 215], 00:24:40.413 | 99.00th=[ 275], 99.50th=[ 284], 99.90th=[ 321], 99.95th=[ 330], 00:24:40.413 | 99.99th=[ 334] 00:24:40.413 bw ( KiB/s): min=92160, max=280576, per=10.10%, avg=146084.90, stdev=43436.64, samples=20 00:24:40.413 iops : min= 360, max= 1096, avg=570.60, stdev=169.71, samples=20 00:24:40.413 lat (msec) : 4=0.12%, 10=0.97%, 20=3.36%, 50=13.94%, 100=24.46% 00:24:40.413 lat (msec) : 250=55.36%, 500=1.79% 
00:24:40.413 cpu : usr=1.99%, sys=2.18%, ctx=3463, majf=0, minf=1 00:24:40.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:40.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.413 issued rwts: total=0,5769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.413 job8: (groupid=0, jobs=1): err= 0: pid=887626: Thu Jul 25 04:07:54 2024 00:24:40.413 write: IOPS=429, BW=107MiB/s (113MB/s)(1084MiB/10090msec); 0 zone resets 00:24:40.413 slat (usec): min=26, max=68123, avg=1984.47, stdev=4691.09 00:24:40.413 clat (msec): min=5, max=318, avg=146.95, stdev=66.28 00:24:40.413 lat (msec): min=8, max=318, avg=148.93, stdev=67.16 00:24:40.413 clat percentiles (msec): 00:24:40.413 | 1.00th=[ 19], 5.00th=[ 38], 10.00th=[ 60], 20.00th=[ 95], 00:24:40.413 | 30.00th=[ 111], 40.00th=[ 121], 50.00th=[ 142], 60.00th=[ 163], 00:24:40.413 | 70.00th=[ 182], 80.00th=[ 205], 90.00th=[ 234], 95.00th=[ 271], 00:24:40.413 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 317], 99.95th=[ 317], 00:24:40.413 | 99.99th=[ 317] 00:24:40.413 bw ( KiB/s): min=61440, max=164352, per=7.56%, avg=109337.60, stdev=33887.60, samples=20 00:24:40.413 iops : min= 240, max= 642, avg=427.10, stdev=132.37, samples=20 00:24:40.413 lat (msec) : 10=0.07%, 20=1.11%, 50=6.88%, 100=14.12%, 250=70.14% 00:24:40.413 lat (msec) : 500=7.68% 00:24:40.413 cpu : usr=1.37%, sys=1.68%, ctx=1894, majf=0, minf=1 00:24:40.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:24:40.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.413 issued rwts: total=0,4334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.413 
job9: (groupid=0, jobs=1): err= 0: pid=887627: Thu Jul 25 04:07:54 2024 00:24:40.413 write: IOPS=390, BW=97.5MiB/s (102MB/s)(990MiB/10146msec); 0 zone resets 00:24:40.413 slat (usec): min=23, max=160642, avg=2188.84, stdev=6678.75 00:24:40.413 clat (msec): min=4, max=344, avg=161.67, stdev=70.45 00:24:40.413 lat (msec): min=6, max=353, avg=163.86, stdev=71.37 00:24:40.413 clat percentiles (msec): 00:24:40.413 | 1.00th=[ 22], 5.00th=[ 53], 10.00th=[ 75], 20.00th=[ 106], 00:24:40.413 | 30.00th=[ 117], 40.00th=[ 127], 50.00th=[ 153], 60.00th=[ 178], 00:24:40.413 | 70.00th=[ 207], 80.00th=[ 226], 90.00th=[ 262], 95.00th=[ 284], 00:24:40.413 | 99.00th=[ 309], 99.50th=[ 317], 99.90th=[ 347], 99.95th=[ 347], 00:24:40.413 | 99.99th=[ 347] 00:24:40.413 bw ( KiB/s): min=57344, max=146944, per=6.89%, avg=99747.25, stdev=29197.70, samples=20 00:24:40.413 iops : min= 224, max= 574, avg=389.60, stdev=114.06, samples=20 00:24:40.413 lat (msec) : 10=0.10%, 20=0.73%, 50=3.74%, 100=12.78%, 250=69.69% 00:24:40.413 lat (msec) : 500=12.96% 00:24:40.413 cpu : usr=1.41%, sys=1.29%, ctx=1530, majf=0, minf=1 00:24:40.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:40.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.413 issued rwts: total=0,3959,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.413 job10: (groupid=0, jobs=1): err= 0: pid=887628: Thu Jul 25 04:07:54 2024 00:24:40.413 write: IOPS=574, BW=144MiB/s (151MB/s)(1447MiB/10070msec); 0 zone resets 00:24:40.413 slat (usec): min=24, max=137519, avg=1215.74, stdev=4247.33 00:24:40.413 clat (usec): min=1269, max=295987, avg=109983.14, stdev=56012.45 00:24:40.413 lat (usec): min=1306, max=296063, avg=111198.88, stdev=56537.19 00:24:40.413 clat percentiles (msec): 00:24:40.413 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 47], 
20.00th=[ 65], 00:24:40.413 | 30.00th=[ 75], 40.00th=[ 91], 50.00th=[ 104], 60.00th=[ 112], 00:24:40.413 | 70.00th=[ 130], 80.00th=[ 157], 90.00th=[ 188], 95.00th=[ 215], 00:24:40.413 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 292], 99.95th=[ 296], 00:24:40.413 | 99.99th=[ 296] 00:24:40.413 bw ( KiB/s): min=79360, max=241152, per=10.12%, avg=146508.80, stdev=40713.79, samples=20 00:24:40.413 iops : min= 310, max= 942, avg=572.30, stdev=159.04, samples=20 00:24:40.413 lat (msec) : 2=0.05%, 4=0.09%, 10=0.71%, 20=1.59%, 50=8.33% 00:24:40.413 lat (msec) : 100=36.17%, 250=50.90%, 500=2.16% 00:24:40.413 cpu : usr=1.91%, sys=2.17%, ctx=2956, majf=0, minf=1 00:24:40.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:40.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.413 issued rwts: total=0,5786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.413 00:24:40.413 Run status group 0 (all jobs): 00:24:40.413 WRITE: bw=1413MiB/s (1482MB/s), 97.5MiB/s-149MiB/s (102MB/s-156MB/s), io=14.0GiB (15.0GB), run=10051-10150msec 00:24:40.413 00:24:40.413 Disk stats (read/write): 00:24:40.413 nvme0n1: ios=51/10130, merge=0/0, ticks=979/1211504, in_queue=1212483, util=99.87% 00:24:40.413 nvme10n1: ios=48/9811, merge=0/0, ticks=1621/1219692, in_queue=1221313, util=100.00% 00:24:40.413 nvme1n1: ios=0/10272, merge=0/0, ticks=0/1219590, in_queue=1219590, util=97.56% 00:24:40.413 nvme2n1: ios=45/10859, merge=0/0, ticks=2001/1216481, in_queue=1218482, util=100.00% 00:24:40.413 nvme3n1: ios=13/10160, merge=0/0, ticks=13/1209967, in_queue=1209980, util=97.82% 00:24:40.413 nvme4n1: ios=0/11894, merge=0/0, ticks=0/1208701, in_queue=1208701, util=98.16% 00:24:40.413 nvme5n1: ios=0/10647, merge=0/0, ticks=0/1213541, in_queue=1213541, util=98.31% 00:24:40.413 nvme6n1: ios=0/11242, 
merge=0/0, ticks=0/1227670, in_queue=1227670, util=98.42% 00:24:40.413 nvme7n1: ios=0/8446, merge=0/0, ticks=0/1209604, in_queue=1209604, util=98.81% 00:24:40.413 nvme8n1: ios=40/7740, merge=0/0, ticks=1901/1168973, in_queue=1170874, util=100.00% 00:24:40.413 nvme9n1: ios=43/11331, merge=0/0, ticks=2284/1188681, in_queue=1190965, util=100.00% 00:24:40.413 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:40.413 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:40.413 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.413 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:40.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:40.413 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:40.413 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:40.413 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:40.413 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:24:40.413 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:40.414 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:40.414 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.414 04:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.414 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:40.672 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:40.672 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:40.672 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:40.672 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:40.672 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:40.672 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:40.672 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:40.672 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:40.672 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:40.672 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.672 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.672 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.672 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.672 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:40.930 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:40.930 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:40.930 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:40.930 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:40.930 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:40.930 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:40.930 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:40.930 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:40.930 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:40.930 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.930 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.930 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.930 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.930 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:41.188 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:41.188 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- 
# waitforserial_disconnect SPDK6 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:41.189 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.189 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:41.447 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:41.447 04:07:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:41.447 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # 
grep -q -w SPDK9 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.447 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:41.705 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:41.705 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:41.705 rmmod nvme_tcp 00:24:41.705 rmmod nvme_fabrics 00:24:41.705 rmmod nvme_keyring 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 882183 ']' 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 882183 00:24:41.705 04:07:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 882183 ']' 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 882183 00:24:41.705 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:24:41.705 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:41.705 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 882183 00:24:41.962 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:41.962 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:41.962 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 882183' 00:24:41.962 killing process with pid 882183 00:24:41.962 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 882183 00:24:41.962 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 882183 00:24:42.527 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:42.527 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:42.527 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:42.527 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:42.527 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:42.527 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.527 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.527 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.423 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:44.423 00:24:44.423 real 1m0.268s 00:24:44.423 user 3m22.851s 00:24:44.423 sys 0m24.535s 00:24:44.423 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:44.423 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:44.423 ************************************ 00:24:44.423 END TEST nvmf_multiconnection 00:24:44.423 ************************************ 00:24:44.423 04:07:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:44.423 04:07:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:44.423 04:07:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:44.423 04:07:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:44.423 ************************************ 00:24:44.423 START TEST nvmf_initiator_timeout 00:24:44.423 ************************************ 00:24:44.423 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:44.423 * Looking for test storage... 
00:24:44.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:44.681 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.681 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:44.681 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.681 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.681 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.681 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.681 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.681 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:44.682 04:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:44.682 04:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:24:44.682 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:46.584 04:08:01 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:46.584 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:46.584 04:08:01 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.584 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:46.585 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:46.585 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:24:46.585 Found net devices under 0000:0a:00.1: cvl_0_1
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:46.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:46.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms
00:24:46.585
00:24:46.585 --- 10.0.0.2 ping statistics ---
00:24:46.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:46.585 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:46.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:46.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms
00:24:46.585
00:24:46.585 --- 10.0.0.1 ping statistics ---
00:24:46.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:46.585 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:46.585 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:46.586 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:46.586 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:46.586 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:46.586 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:46.879 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF
00:24:46.879 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:46.879 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:46.879 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:46.879 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=890940
00:24:46.879 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:46.879 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 890940
00:24:46.879 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 890940 ']'
00:24:46.879 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:46.879 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:46.879 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:46.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:46.879 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:46.879 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:46.879 [2024-07-25 04:08:01.945187] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization...
00:24:46.879 [2024-07-25 04:08:01.945298] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:46.879 EAL: No free 2048 kB hugepages reported on node 1
00:24:46.879 [2024-07-25 04:08:01.986149] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:24:46.879 [2024-07-25 04:08:02.013216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:46.879 [2024-07-25 04:08:02.101462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:46.879 [2024-07-25 04:08:02.101513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:46.879 [2024-07-25 04:08:02.101538] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:46.879 [2024-07-25 04:08:02.101549] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:46.879 [2024-07-25 04:08:02.101558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:46.879 [2024-07-25 04:08:02.101624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:46.879 [2024-07-25 04:08:02.101654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:46.879 [2024-07-25 04:08:02.101710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:46.879 [2024-07-25 04:08:02.101713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:47.137 Malloc0
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:47.137 Delay0
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:47.137 [2024-07-25 04:08:02.296262] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:47.137 [2024-07-25 04:08:02.324588] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:47.137 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:24:48.068 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME
00:24:48.068 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0
00:24:48.068 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:24:48.068 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:24:48.068 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2
00:24:49.963 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:24:49.963 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:24:49.963 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:24:49.963 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:24:49.963 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:24:49.963 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0
00:24:49.963 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=891331
00:24:49.963 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
00:24:49.963 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3
00:24:49.963 [global]
00:24:49.963 thread=1
00:24:49.963 invalidate=1
00:24:49.963 rw=write
00:24:49.963 time_based=1
00:24:49.963 runtime=60
00:24:49.963 ioengine=libaio
00:24:49.963 direct=1
00:24:49.963 bs=4096
00:24:49.963 iodepth=1
00:24:49.963 norandommap=0
00:24:49.963 numjobs=1
00:24:49.963
00:24:49.963 verify_dump=1
00:24:49.963 verify_backlog=512
00:24:49.963 verify_state_save=0
00:24:49.963 do_verify=1
00:24:49.963 verify=crc32c-intel
00:24:49.963 [job0]
00:24:49.963 filename=/dev/nvme0n1
00:24:49.963 Could not set queue depth (nvme0n1)
00:24:49.963 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:24:49.963 fio-3.35
00:24:49.963 Starting 1 thread
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:53.239 true
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:53.239 true
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:53.239 true
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:53.239 true
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.239 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:55.940 true
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:55.940 true
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:55.940 true
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:55.940 true
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0
00:24:55.940 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 891331
00:25:52.145
00:25:52.145 job0: (groupid=0, jobs=1): err= 0: pid=891446: Thu Jul 25 04:09:05 2024
00:25:52.145 read: IOPS=66, BW=265KiB/s (271kB/s)(15.5MiB/60019msec)
00:25:52.145 slat (nsec): min=5393, max=53414, avg=10595.64, stdev=6654.93
00:25:52.145 clat (usec): min=282, max=40956k, avg=14797.82, stdev=649651.06
00:25:52.145 lat (usec): min=288, max=40956k, avg=14808.41, stdev=649651.18
00:25:52.145 clat percentiles (usec):
00:25:52.145 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 302],
00:25:52.145 | 20.00th=[ 306], 30.00th=[ 310], 40.00th=[ 314],
00:25:52.145 | 50.00th=[ 326], 60.00th=[ 338], 70.00th=[ 347],
00:25:52.145 | 80.00th=[ 359], 90.00th=[ 40633], 95.00th=[ 41681],
00:25:52.145 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206],
00:25:52.145 | 99.95th=[ 43779], 99.99th=[17112761]
00:25:52.145 write: IOPS=68, BW=273KiB/s (280kB/s)(16.0MiB/60019msec); 0 zone resets
00:25:52.145 slat (usec): min=6, max=9740, avg=18.84, stdev=203.40
00:25:52.145 clat (usec): min=200, max=1666, avg=255.89, stdev=52.78
00:25:52.145 lat (usec): min=207, max=9990, avg=274.72, stdev=211.30
00:25:52.145 clat percentiles (usec):
00:25:52.145 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 223],
00:25:52.145 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 249],
00:25:52.145 | 70.00th=[ 260], 80.00th=[ 281], 90.00th=[ 326], 95.00th=[ 363],
00:25:52.145 | 99.00th=[ 420], 99.50th=[ 429], 99.90th=[ 465], 99.95th=[ 502],
00:25:52.145 | 99.99th=[ 1663]
00:25:52.145 bw ( KiB/s): min= 624, max= 8192, per=100.00%, avg=5461.33, stdev=2768.26, samples=6
00:25:52.145 iops : min= 156, max= 2048, avg=1365.33, stdev=692.07, samples=6
00:25:52.145 lat (usec) : 250=31.05%, 500=63.80%, 750=0.06%, 1000=0.06%
00:25:52.145 lat (msec) : 2=0.04%, 50=4.98%, >=2000=0.01%
00:25:52.145 cpu : usr=0.13%, sys=0.23%, ctx=8075, majf=0, minf=2
00:25:52.145 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:52.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:52.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:52.145 issued rwts: total=3975,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:52.145 latency : target=0, window=0, percentile=100.00%, depth=1
00:25:52.145
00:25:52.146 Run status group 0 (all jobs):
00:25:52.146 READ: bw=265KiB/s (271kB/s), 265KiB/s-265KiB/s (271kB/s-271kB/s), io=15.5MiB (16.3MB), run=60019-60019msec
00:25:52.146 WRITE: bw=273KiB/s (280kB/s), 273KiB/s-273KiB/s (280kB/s-280kB/s), io=16.0MiB (16.8MB), run=60019-60019msec
00:25:52.146
00:25:52.146 Disk stats (read/write):
00:25:52.146 nvme0n1: ios=4071/4096, merge=0/0, ticks=18989/1012, in_queue=20001, util=99.84%
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:25:52.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']'
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected'
00:25:52.146 nvmf hotplug test: fio successful as expected
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:52.146 rmmod nvme_tcp
00:25:52.146 rmmod nvme_fabrics
00:25:52.146 rmmod nvme_keyring
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 890940 ']'
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 890940
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 890940 ']'
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 890940
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 890940
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 890940'
00:25:52.146 killing process with pid 890940
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 890940
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 890940
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:52.146 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:52.712 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:52.712
00:25:52.712 real 1m8.232s
00:25:52.712 user 4m10.904s
00:25:52.712 sys 0m6.597s
00:25:52.712 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:52.712 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.712 ************************************
00:25:52.712 END TEST nvmf_initiator_timeout
00:25:52.712 ************************************
00:25:52.712 04:09:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]]
00:25:52.712 04:09:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']'
00:25:52.712 04:09:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs
00:25:52.712 04:09:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable
00:25:52.712 04:09:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=()
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=()
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=()
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=()
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=()
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=()
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=()
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:25:54.613 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:25:54.613 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:25:54.613 Found net devices under 0000:0a:00.0: cvl_0_0
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:25:54.613 Found net devices under 0000:0a:00.1: cvl_0_1
00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- #
net_devs+=("${pci_net_devs[@]}") 00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:54.613 04:09:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:54.614 04:09:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:54.614 04:09:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:54.614 ************************************ 00:25:54.614 START TEST nvmf_perf_adq 00:25:54.614 ************************************ 00:25:54.614 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:54.872 * Looking for test storage... 
00:25:54.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.872 04:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:54.872 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:56.772 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:56.772 04:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:56.772 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:56.772 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:56.773 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:56.773 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:25:56.773 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:25:57.340 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:25:59.240 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:04.505 
04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:04.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:04.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.505 04:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:04.505 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.505 04:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:04.505 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.505 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:04.506 
04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:04.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
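The namespace plumbing exercised in the `nvmf_tcp_init` steps above can be reproduced in isolation. A minimal sketch, assuming the same `cvl_0_0`/`cvl_0_1` interface names and 10.0.0.0/24 addressing seen in this log (requires root and real NICs; a host-configuration fragment, not meant for an ordinary workstation):

```shell
# Move the target-side port into its own namespace and wire up addressing,
# mirroring the nvmf_tcp_init sequence logged above (names/IPs taken from this log).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                 # sanity check: reach the namespaced port
```

The two pings in the log (root ns to 10.0.0.2, then namespaced side back to 10.0.0.1) are exactly this sanity check run in both directions.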
00:26:04.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:26:04.506 00:26:04.506 --- 10.0.0.2 ping statistics --- 00:26:04.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.506 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:04.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:26:04.506 00:26:04.506 --- 10.0.0.1 ping statistics --- 00:26:04.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.506 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=903579 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 903579 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 903579 ']' 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:04.506 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:04.506 [2024-07-25 04:09:19.671194] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:26:04.506 [2024-07-25 04:09:19.671290] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.506 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.506 [2024-07-25 04:09:19.711205] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:04.506 [2024-07-25 04:09:19.741476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:04.764 [2024-07-25 04:09:19.838304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:04.764 [2024-07-25 04:09:19.838358] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.764 [2024-07-25 04:09:19.838385] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.764 [2024-07-25 04:09:19.838399] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:04.764 [2024-07-25 04:09:19.838411] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
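Because `nvmf_tgt` was started with `--wait-for-rpc`, subsystem wiring happens over the RPC socket once startup completes. A hedged sketch of the equivalent bring-up using SPDK's `scripts/rpc.py` (the RPC names, arguments, NQN, and listener address are the ones this log issues via `rpc_cmd`; the `./scripts/rpc.py` path is an assumption about a standard SPDK checkout):

```shell
RPC=./scripts/rpc.py                               # talks to /var/tmp/spdk.sock by default
$RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
$RPC framework_start_init                          # release the --wait-for-rpc pause
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$RPC bdev_malloc_create 64 512 -b Malloc1          # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The Unix-domain RPC socket lives on the filesystem, so it stays reachable from the root namespace even though the target's data-path interface was moved into `cvl_0_0_ns_spdk`.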
00:26:04.764 [2024-07-25 04:09:19.838471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.764 [2024-07-25 04:09:19.838499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:04.764 [2024-07-25 04:09:19.838623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.764 [2024-07-25 04:09:19.838620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:04.764 04:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.764 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:05.023 [2024-07-25 04:09:20.068335] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:05.023 Malloc1 00:26:05.023 04:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:05.023 [2024-07-25 04:09:20.121643] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=903610 00:26:05.023 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:05.023 04:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:05.023 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.922 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:06.922 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.922 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:06.922 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.922 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:06.922 "tick_rate": 2700000000, 00:26:06.922 "poll_groups": [ 00:26:06.922 { 00:26:06.922 "name": "nvmf_tgt_poll_group_000", 00:26:06.922 "admin_qpairs": 1, 00:26:06.922 "io_qpairs": 1, 00:26:06.922 "current_admin_qpairs": 1, 00:26:06.922 "current_io_qpairs": 1, 00:26:06.922 "pending_bdev_io": 0, 00:26:06.922 "completed_nvme_io": 20789, 00:26:06.922 "transports": [ 00:26:06.922 { 00:26:06.922 "trtype": "TCP" 00:26:06.922 } 00:26:06.922 ] 00:26:06.922 }, 00:26:06.922 { 00:26:06.922 "name": "nvmf_tgt_poll_group_001", 00:26:06.922 "admin_qpairs": 0, 00:26:06.922 "io_qpairs": 1, 00:26:06.922 "current_admin_qpairs": 0, 00:26:06.922 "current_io_qpairs": 1, 00:26:06.922 "pending_bdev_io": 0, 00:26:06.922 "completed_nvme_io": 18243, 00:26:06.922 "transports": [ 00:26:06.922 { 00:26:06.922 "trtype": "TCP" 00:26:06.922 } 00:26:06.922 ] 00:26:06.922 }, 00:26:06.922 { 00:26:06.922 "name": "nvmf_tgt_poll_group_002", 00:26:06.922 "admin_qpairs": 0, 00:26:06.922 "io_qpairs": 1, 00:26:06.922 "current_admin_qpairs": 0, 00:26:06.922 "current_io_qpairs": 1, 00:26:06.922 "pending_bdev_io": 0, 
00:26:06.922 "completed_nvme_io": 21210, 00:26:06.922 "transports": [ 00:26:06.922 { 00:26:06.922 "trtype": "TCP" 00:26:06.922 } 00:26:06.922 ] 00:26:06.922 }, 00:26:06.922 { 00:26:06.922 "name": "nvmf_tgt_poll_group_003", 00:26:06.922 "admin_qpairs": 0, 00:26:06.922 "io_qpairs": 1, 00:26:06.922 "current_admin_qpairs": 0, 00:26:06.922 "current_io_qpairs": 1, 00:26:06.922 "pending_bdev_io": 0, 00:26:06.922 "completed_nvme_io": 19733, 00:26:06.922 "transports": [ 00:26:06.922 { 00:26:06.922 "trtype": "TCP" 00:26:06.922 } 00:26:06.922 ] 00:26:06.922 } 00:26:06.922 ] 00:26:06.922 }' 00:26:06.922 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:06.922 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:06.922 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:06.922 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:06.922 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 903610 00:26:15.024 Initializing NVMe Controllers 00:26:15.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:15.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:15.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:15.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:15.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:15.024 Initialization complete. Launching workers. 
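The `count=4` check above distills to a small jq pipeline over `nvmf_get_stats` output: select the poll groups currently servicing an I/O qpair, print one line each, and count the lines. A self-contained re-run against a trimmed copy of the stats JSON from this log (fields other than `current_io_qpairs` dropped for brevity):

```shell
# Count poll groups that currently own an I/O qpair, as perf_adq.sh@78 does.
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":1}]}'
count=$(echo "$stats" \
  | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
  | wc -l | tr -d ' ')
echo "$count"   # → 4
```

Note that `length` here is just a cheap way to emit one line per selected object; it is `wc -l` that produces the count the test compares against the expected 4 ADQ queues.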
00:26:15.024 ======================================================== 00:26:15.024 Latency(us) 00:26:15.024 Device Information : IOPS MiB/s Average min max 00:26:15.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11161.90 43.60 5733.16 2293.46 7474.27 00:26:15.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9639.90 37.66 6641.28 2490.12 10462.98 00:26:15.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10400.70 40.63 6154.45 2122.86 9333.58 00:26:15.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10977.20 42.88 5831.60 2415.15 8806.53 00:26:15.024 ======================================================== 00:26:15.024 Total : 42179.70 164.76 6070.21 2122.86 10462.98 00:26:15.025 00:26:15.025 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:15.025 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:15.025 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:15.025 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:15.025 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:15.025 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:15.025 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:15.025 rmmod nvme_tcp 00:26:15.025 rmmod nvme_fabrics 00:26:15.025 rmmod nvme_keyring 00:26:15.282 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:15.282 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:15.282 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:15.283 04:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 903579 ']' 00:26:15.283 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 903579 00:26:15.283 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 903579 ']' 00:26:15.283 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 903579 00:26:15.283 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:15.283 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:15.283 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 903579 00:26:15.283 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:15.283 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:15.283 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 903579' 00:26:15.283 killing process with pid 903579 00:26:15.283 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 903579 00:26:15.283 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 903579 00:26:15.540 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:15.540 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:15.540 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:15.540 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:15.540 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 
-- # remove_spdk_ns 00:26:15.540 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.540 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.540 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.445 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:17.445 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:17.445 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:18.397 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:20.294 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.559 
04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@298 -- # local -ga mlx 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:25.559 04:09:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:25.559 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:25.559 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.559 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:25.560 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:25.560 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:25.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:25.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:26:25.560 00:26:25.560 --- 10.0.0.2 ping statistics --- 00:26:25.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.560 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:25.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:25.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:26:25.560 00:26:25.560 --- 10.0.0.1 ping statistics --- 00:26:25.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.560 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:25.560 net.core.busy_poll = 1 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:25.560 net.core.busy_read = 1 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=906216 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 906216 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 906216 ']' 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:25.560 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:25.560 [2024-07-25 04:09:40.717758] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:26:25.560 [2024-07-25 04:09:40.717860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.560 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.560 [2024-07-25 04:09:40.756197] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:26:25.560 [2024-07-25 04:09:40.788472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:25.818 [2024-07-25 04:09:40.885341] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.818 [2024-07-25 04:09:40.885392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.818 [2024-07-25 04:09:40.885419] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.818 [2024-07-25 04:09:40.885433] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.819 [2024-07-25 04:09:40.885444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.819 [2024-07-25 04:09:40.885503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.819 [2024-07-25 04:09:40.885584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.819 [2024-07-25 04:09:40.885882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:25.819 [2024-07-25 04:09:40.885886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.819 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:25.819 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:25.819 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:25.819 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:25.819 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:25.819 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.819 04:09:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:25.819 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:25.819 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:25.819 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.819 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:25.819 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.819 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:25.819 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:25.819 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.819 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:25.819 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.819 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:25.819 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.819 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.077 [2024-07-25 04:09:41.126854] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.077 Malloc1 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.077 [2024-07-25 04:09:41.177987] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=906364 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:26.077 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:26.077 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.976 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:27.976 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.976 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:27.976 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.976 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:27.976 "tick_rate": 2700000000, 00:26:27.976 "poll_groups": [ 00:26:27.976 { 00:26:27.976 "name": "nvmf_tgt_poll_group_000", 00:26:27.976 "admin_qpairs": 1, 00:26:27.976 "io_qpairs": 1, 00:26:27.976 "current_admin_qpairs": 1, 00:26:27.976 "current_io_qpairs": 1, 00:26:27.976 "pending_bdev_io": 0, 
00:26:27.976 "completed_nvme_io": 22161, 00:26:27.976 "transports": [ 00:26:27.976 { 00:26:27.976 "trtype": "TCP" 00:26:27.976 } 00:26:27.976 ] 00:26:27.976 }, 00:26:27.976 { 00:26:27.976 "name": "nvmf_tgt_poll_group_001", 00:26:27.976 "admin_qpairs": 0, 00:26:27.976 "io_qpairs": 3, 00:26:27.976 "current_admin_qpairs": 0, 00:26:27.976 "current_io_qpairs": 3, 00:26:27.976 "pending_bdev_io": 0, 00:26:27.976 "completed_nvme_io": 27924, 00:26:27.976 "transports": [ 00:26:27.976 { 00:26:27.976 "trtype": "TCP" 00:26:27.976 } 00:26:27.976 ] 00:26:27.976 }, 00:26:27.976 { 00:26:27.976 "name": "nvmf_tgt_poll_group_002", 00:26:27.976 "admin_qpairs": 0, 00:26:27.976 "io_qpairs": 0, 00:26:27.976 "current_admin_qpairs": 0, 00:26:27.976 "current_io_qpairs": 0, 00:26:27.976 "pending_bdev_io": 0, 00:26:27.976 "completed_nvme_io": 0, 00:26:27.976 "transports": [ 00:26:27.976 { 00:26:27.976 "trtype": "TCP" 00:26:27.976 } 00:26:27.976 ] 00:26:27.976 }, 00:26:27.976 { 00:26:27.976 "name": "nvmf_tgt_poll_group_003", 00:26:27.976 "admin_qpairs": 0, 00:26:27.976 "io_qpairs": 0, 00:26:27.976 "current_admin_qpairs": 0, 00:26:27.976 "current_io_qpairs": 0, 00:26:27.976 "pending_bdev_io": 0, 00:26:27.976 "completed_nvme_io": 0, 00:26:27.976 "transports": [ 00:26:27.976 { 00:26:27.976 "trtype": "TCP" 00:26:27.976 } 00:26:27.976 ] 00:26:27.976 } 00:26:27.976 ] 00:26:27.976 }' 00:26:27.976 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:27.976 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:27.976 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:26:27.976 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:26:27.976 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 906364 00:26:36.076 Initializing NVMe Controllers 
00:26:36.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:36.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:36.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:36.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:36.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:36.076 Initialization complete. Launching workers. 00:26:36.076 ======================================================== 00:26:36.076 Latency(us) 00:26:36.076 Device Information : IOPS MiB/s Average min max 00:26:36.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11993.10 46.85 5336.39 2396.87 7667.17 00:26:36.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4455.60 17.40 14371.33 2043.62 61222.31 00:26:36.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5120.20 20.00 12504.60 1906.06 60767.90 00:26:36.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5016.60 19.60 12761.93 1968.29 59134.36 00:26:36.076 ======================================================== 00:26:36.076 Total : 26585.50 103.85 9632.33 1906.06 61222.31 00:26:36.076 00:26:36.076 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:26:36.076 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.076 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:36.076 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.076 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:36.076 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.076 04:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.076 rmmod nvme_tcp 00:26:36.076 rmmod nvme_fabrics 00:26:36.076 rmmod nvme_keyring 00:26:36.076 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.076 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:36.076 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:36.076 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 906216 ']' 00:26:36.076 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 906216 00:26:36.076 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 906216 ']' 00:26:36.076 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 906216 00:26:36.076 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:36.334 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:36.334 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 906216 00:26:36.334 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:36.334 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:36.334 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 906216' 00:26:36.334 killing process with pid 906216 00:26:36.335 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 906216 00:26:36.335 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 906216 00:26:36.593 04:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:36.593 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:36.593 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:36.593 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:36.593 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:36.593 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.593 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:36.593 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.876 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:39.876 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:39.876 00:26:39.876 real 0m44.815s 00:26:39.876 user 2m31.317s 00:26:39.876 sys 0m12.509s 00:26:39.876 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:39.876 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.876 ************************************ 00:26:39.876 END TEST nvmf_perf_adq 00:26:39.876 ************************************ 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:39.877 ************************************ 00:26:39.877 START TEST nvmf_shutdown 00:26:39.877 ************************************ 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:39.877 * Looking for test storage... 00:26:39.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.877 04:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:39.877 
04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:39.877 ************************************ 00:26:39.877 START TEST nvmf_shutdown_tc1 00:26:39.877 ************************************ 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.877 04:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:39.877 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 
-- # local -ga e810 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:41.786 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.786 04:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:41.786 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.786 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:41.787 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:41.787 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.787 
04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:41.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:41.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:26:41.787 00:26:41.787 --- 10.0.0.2 ping statistics --- 00:26:41.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.787 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:41.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:26:41.787 00:26:41.787 --- 10.0.0.1 ping statistics --- 00:26:41.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.787 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:41.787 04:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=909651 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 909651 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 909651 ']' 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:41.787 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:41.787 [2024-07-25 04:09:57.026188] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:26:41.787 [2024-07-25 04:09:57.026290] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.787 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.787 [2024-07-25 04:09:57.063124] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:42.046 [2024-07-25 04:09:57.090101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:42.046 [2024-07-25 04:09:57.176876] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:42.046 [2024-07-25 04:09:57.176921] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:42.046 [2024-07-25 04:09:57.176950] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.046 [2024-07-25 04:09:57.176963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:42.046 [2024-07-25 04:09:57.176972] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:42.046 [2024-07-25 04:09:57.177073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.046 [2024-07-25 04:09:57.177131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:42.046 [2024-07-25 04:09:57.177466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:42.046 [2024-07-25 04:09:57.177470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:42.046 [2024-07-25 04:09:57.317393] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.046 04:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:42.046 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:42.304 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:42.304 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:42.304 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:42.304 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:42.304 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:42.304 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.304 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:42.304 Malloc1 00:26:42.304 [2024-07-25 04:09:57.392198] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.304 Malloc2 00:26:42.304 Malloc3 00:26:42.304 Malloc4 00:26:42.304 Malloc5 00:26:42.562 Malloc6 00:26:42.562 Malloc7 00:26:42.562 Malloc8 00:26:42.562 Malloc9 
00:26:42.562 Malloc10 00:26:42.562 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.562 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:42.562 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:42.562 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=909713 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 909713 /var/tmp/bdevperf.sock 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 909713 ']' 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:26:42.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.821 { 00:26:42.821 "params": { 00:26:42.821 "name": "Nvme$subsystem", 00:26:42.821 "trtype": "$TEST_TRANSPORT", 00:26:42.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.821 "adrfam": "ipv4", 00:26:42.821 "trsvcid": "$NVMF_PORT", 00:26:42.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.821 "hdgst": ${hdgst:-false}, 00:26:42.821 "ddgst": ${ddgst:-false} 00:26:42.821 }, 00:26:42.821 "method": "bdev_nvme_attach_controller" 00:26:42.821 } 00:26:42.821 EOF 00:26:42.821 )") 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.821 { 00:26:42.821 "params": { 00:26:42.821 "name": "Nvme$subsystem", 00:26:42.821 "trtype": "$TEST_TRANSPORT", 00:26:42.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.821 "adrfam": "ipv4", 00:26:42.821 "trsvcid": "$NVMF_PORT", 00:26:42.821 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.821 "hdgst": ${hdgst:-false}, 00:26:42.821 "ddgst": ${ddgst:-false} 00:26:42.821 }, 00:26:42.821 "method": "bdev_nvme_attach_controller" 00:26:42.821 } 00:26:42.821 EOF 00:26:42.821 )") 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.821 { 00:26:42.821 "params": { 00:26:42.821 "name": "Nvme$subsystem", 00:26:42.821 "trtype": "$TEST_TRANSPORT", 00:26:42.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.821 "adrfam": "ipv4", 00:26:42.821 "trsvcid": "$NVMF_PORT", 00:26:42.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.821 "hdgst": ${hdgst:-false}, 00:26:42.821 "ddgst": ${ddgst:-false} 00:26:42.821 }, 00:26:42.821 "method": "bdev_nvme_attach_controller" 00:26:42.821 } 00:26:42.821 EOF 00:26:42.821 )") 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.821 { 00:26:42.821 "params": { 00:26:42.821 "name": "Nvme$subsystem", 00:26:42.821 "trtype": "$TEST_TRANSPORT", 00:26:42.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.821 "adrfam": "ipv4", 00:26:42.821 "trsvcid": "$NVMF_PORT", 00:26:42.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.821 "hdgst": 
${hdgst:-false}, 00:26:42.821 "ddgst": ${ddgst:-false} 00:26:42.821 }, 00:26:42.821 "method": "bdev_nvme_attach_controller" 00:26:42.821 } 00:26:42.821 EOF 00:26:42.821 )") 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.821 { 00:26:42.821 "params": { 00:26:42.821 "name": "Nvme$subsystem", 00:26:42.821 "trtype": "$TEST_TRANSPORT", 00:26:42.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.821 "adrfam": "ipv4", 00:26:42.821 "trsvcid": "$NVMF_PORT", 00:26:42.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.821 "hdgst": ${hdgst:-false}, 00:26:42.821 "ddgst": ${ddgst:-false} 00:26:42.821 }, 00:26:42.821 "method": "bdev_nvme_attach_controller" 00:26:42.821 } 00:26:42.821 EOF 00:26:42.821 )") 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.821 { 00:26:42.821 "params": { 00:26:42.821 "name": "Nvme$subsystem", 00:26:42.821 "trtype": "$TEST_TRANSPORT", 00:26:42.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.821 "adrfam": "ipv4", 00:26:42.821 "trsvcid": "$NVMF_PORT", 00:26:42.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.821 "hdgst": ${hdgst:-false}, 00:26:42.821 "ddgst": ${ddgst:-false} 00:26:42.821 }, 00:26:42.821 "method": "bdev_nvme_attach_controller" 
00:26:42.821 } 00:26:42.821 EOF 00:26:42.821 )") 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.821 { 00:26:42.821 "params": { 00:26:42.821 "name": "Nvme$subsystem", 00:26:42.821 "trtype": "$TEST_TRANSPORT", 00:26:42.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.821 "adrfam": "ipv4", 00:26:42.821 "trsvcid": "$NVMF_PORT", 00:26:42.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.821 "hdgst": ${hdgst:-false}, 00:26:42.821 "ddgst": ${ddgst:-false} 00:26:42.821 }, 00:26:42.821 "method": "bdev_nvme_attach_controller" 00:26:42.821 } 00:26:42.821 EOF 00:26:42.821 )") 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.821 { 00:26:42.821 "params": { 00:26:42.821 "name": "Nvme$subsystem", 00:26:42.821 "trtype": "$TEST_TRANSPORT", 00:26:42.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.821 "adrfam": "ipv4", 00:26:42.821 "trsvcid": "$NVMF_PORT", 00:26:42.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.821 "hdgst": ${hdgst:-false}, 00:26:42.821 "ddgst": ${ddgst:-false} 00:26:42.821 }, 00:26:42.821 "method": "bdev_nvme_attach_controller" 00:26:42.821 } 00:26:42.821 EOF 00:26:42.821 )") 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.821 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.821 { 00:26:42.821 "params": { 00:26:42.821 "name": "Nvme$subsystem", 00:26:42.821 "trtype": "$TEST_TRANSPORT", 00:26:42.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.821 "adrfam": "ipv4", 00:26:42.821 "trsvcid": "$NVMF_PORT", 00:26:42.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.821 "hdgst": ${hdgst:-false}, 00:26:42.821 "ddgst": ${ddgst:-false} 00:26:42.821 }, 00:26:42.821 "method": "bdev_nvme_attach_controller" 00:26:42.821 } 00:26:42.821 EOF 00:26:42.821 )") 00:26:42.822 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:42.822 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.822 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.822 { 00:26:42.822 "params": { 00:26:42.822 "name": "Nvme$subsystem", 00:26:42.822 "trtype": "$TEST_TRANSPORT", 00:26:42.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.822 "adrfam": "ipv4", 00:26:42.822 "trsvcid": "$NVMF_PORT", 00:26:42.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.822 "hdgst": ${hdgst:-false}, 00:26:42.822 "ddgst": ${ddgst:-false} 00:26:42.822 }, 00:26:42.822 "method": "bdev_nvme_attach_controller" 00:26:42.822 } 00:26:42.822 EOF 00:26:42.822 )") 00:26:42.822 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:42.822 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@556 -- # jq . 00:26:42.822 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:42.822 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:42.822 "params": { 00:26:42.822 "name": "Nvme1", 00:26:42.822 "trtype": "tcp", 00:26:42.822 "traddr": "10.0.0.2", 00:26:42.822 "adrfam": "ipv4", 00:26:42.822 "trsvcid": "4420", 00:26:42.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:42.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:42.822 "hdgst": false, 00:26:42.822 "ddgst": false 00:26:42.822 }, 00:26:42.822 "method": "bdev_nvme_attach_controller" 00:26:42.822 },{ 00:26:42.822 "params": { 00:26:42.822 "name": "Nvme2", 00:26:42.822 "trtype": "tcp", 00:26:42.822 "traddr": "10.0.0.2", 00:26:42.822 "adrfam": "ipv4", 00:26:42.822 "trsvcid": "4420", 00:26:42.822 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:42.822 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:42.822 "hdgst": false, 00:26:42.822 "ddgst": false 00:26:42.822 }, 00:26:42.822 "method": "bdev_nvme_attach_controller" 00:26:42.822 },{ 00:26:42.822 "params": { 00:26:42.822 "name": "Nvme3", 00:26:42.822 "trtype": "tcp", 00:26:42.822 "traddr": "10.0.0.2", 00:26:42.822 "adrfam": "ipv4", 00:26:42.822 "trsvcid": "4420", 00:26:42.822 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:42.822 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:42.822 "hdgst": false, 00:26:42.822 "ddgst": false 00:26:42.822 }, 00:26:42.822 "method": "bdev_nvme_attach_controller" 00:26:42.822 },{ 00:26:42.822 "params": { 00:26:42.822 "name": "Nvme4", 00:26:42.822 "trtype": "tcp", 00:26:42.822 "traddr": "10.0.0.2", 00:26:42.822 "adrfam": "ipv4", 00:26:42.822 "trsvcid": "4420", 00:26:42.822 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:42.822 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:42.822 "hdgst": false, 00:26:42.822 "ddgst": false 00:26:42.822 }, 00:26:42.822 "method": "bdev_nvme_attach_controller" 00:26:42.822 },{ 
00:26:42.822 "params": { 00:26:42.822 "name": "Nvme5", 00:26:42.822 "trtype": "tcp", 00:26:42.822 "traddr": "10.0.0.2", 00:26:42.822 "adrfam": "ipv4", 00:26:42.822 "trsvcid": "4420", 00:26:42.822 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:42.822 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:42.822 "hdgst": false, 00:26:42.822 "ddgst": false 00:26:42.822 }, 00:26:42.822 "method": "bdev_nvme_attach_controller" 00:26:42.822 },{ 00:26:42.822 "params": { 00:26:42.822 "name": "Nvme6", 00:26:42.822 "trtype": "tcp", 00:26:42.822 "traddr": "10.0.0.2", 00:26:42.822 "adrfam": "ipv4", 00:26:42.822 "trsvcid": "4420", 00:26:42.822 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:42.822 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:42.822 "hdgst": false, 00:26:42.822 "ddgst": false 00:26:42.822 }, 00:26:42.822 "method": "bdev_nvme_attach_controller" 00:26:42.822 },{ 00:26:42.822 "params": { 00:26:42.822 "name": "Nvme7", 00:26:42.822 "trtype": "tcp", 00:26:42.822 "traddr": "10.0.0.2", 00:26:42.822 "adrfam": "ipv4", 00:26:42.822 "trsvcid": "4420", 00:26:42.822 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:42.822 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:42.822 "hdgst": false, 00:26:42.822 "ddgst": false 00:26:42.822 }, 00:26:42.822 "method": "bdev_nvme_attach_controller" 00:26:42.822 },{ 00:26:42.822 "params": { 00:26:42.822 "name": "Nvme8", 00:26:42.822 "trtype": "tcp", 00:26:42.822 "traddr": "10.0.0.2", 00:26:42.822 "adrfam": "ipv4", 00:26:42.822 "trsvcid": "4420", 00:26:42.822 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:42.822 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:42.822 "hdgst": false, 00:26:42.822 "ddgst": false 00:26:42.822 }, 00:26:42.822 "method": "bdev_nvme_attach_controller" 00:26:42.822 },{ 00:26:42.822 "params": { 00:26:42.822 "name": "Nvme9", 00:26:42.822 "trtype": "tcp", 00:26:42.822 "traddr": "10.0.0.2", 00:26:42.822 "adrfam": "ipv4", 00:26:42.822 "trsvcid": "4420", 00:26:42.822 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:42.822 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:26:42.822 "hdgst": false, 00:26:42.822 "ddgst": false 00:26:42.822 }, 00:26:42.822 "method": "bdev_nvme_attach_controller" 00:26:42.822 },{ 00:26:42.822 "params": { 00:26:42.822 "name": "Nvme10", 00:26:42.822 "trtype": "tcp", 00:26:42.822 "traddr": "10.0.0.2", 00:26:42.822 "adrfam": "ipv4", 00:26:42.822 "trsvcid": "4420", 00:26:42.822 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:42.822 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:42.822 "hdgst": false, 00:26:42.822 "ddgst": false 00:26:42.822 }, 00:26:42.822 "method": "bdev_nvme_attach_controller" 00:26:42.822 }' 00:26:42.822 [2024-07-25 04:09:57.906705] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:26:42.822 [2024-07-25 04:09:57.906779] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:42.822 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.822 [2024-07-25 04:09:57.943466] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:42.822 [2024-07-25 04:09:57.973721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.822 [2024-07-25 04:09:58.061953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.723 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:44.723 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:44.723 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:44.723 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.723 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:44.723 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.723 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 909713 00:26:44.723 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:44.723 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:26:45.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 909713 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 909651 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w 
verify -t 1 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.656 { 00:26:45.656 "params": { 00:26:45.656 "name": "Nvme$subsystem", 00:26:45.656 "trtype": "$TEST_TRANSPORT", 00:26:45.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.656 "adrfam": "ipv4", 00:26:45.656 "trsvcid": "$NVMF_PORT", 00:26:45.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.656 "hdgst": ${hdgst:-false}, 00:26:45.656 "ddgst": ${ddgst:-false} 00:26:45.656 }, 00:26:45.656 "method": "bdev_nvme_attach_controller" 00:26:45.656 } 00:26:45.656 EOF 00:26:45.656 )") 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.656 { 00:26:45.656 "params": { 00:26:45.656 "name": "Nvme$subsystem", 00:26:45.656 "trtype": "$TEST_TRANSPORT", 00:26:45.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.656 "adrfam": "ipv4", 00:26:45.656 "trsvcid": "$NVMF_PORT", 00:26:45.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.656 "hdgst": 
${hdgst:-false}, 00:26:45.656 "ddgst": ${ddgst:-false} 00:26:45.656 }, 00:26:45.656 "method": "bdev_nvme_attach_controller" 00:26:45.656 } 00:26:45.656 EOF 00:26:45.656 )") 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.656 { 00:26:45.656 "params": { 00:26:45.656 "name": "Nvme$subsystem", 00:26:45.656 "trtype": "$TEST_TRANSPORT", 00:26:45.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.656 "adrfam": "ipv4", 00:26:45.656 "trsvcid": "$NVMF_PORT", 00:26:45.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.656 "hdgst": ${hdgst:-false}, 00:26:45.656 "ddgst": ${ddgst:-false} 00:26:45.656 }, 00:26:45.656 "method": "bdev_nvme_attach_controller" 00:26:45.656 } 00:26:45.656 EOF 00:26:45.656 )") 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.656 { 00:26:45.656 "params": { 00:26:45.656 "name": "Nvme$subsystem", 00:26:45.656 "trtype": "$TEST_TRANSPORT", 00:26:45.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.656 "adrfam": "ipv4", 00:26:45.656 "trsvcid": "$NVMF_PORT", 00:26:45.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.656 "hdgst": ${hdgst:-false}, 00:26:45.656 "ddgst": ${ddgst:-false} 00:26:45.656 }, 00:26:45.656 "method": "bdev_nvme_attach_controller" 
00:26:45.656 } 00:26:45.656 EOF 00:26:45.656 )") 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.656 { 00:26:45.656 "params": { 00:26:45.656 "name": "Nvme$subsystem", 00:26:45.656 "trtype": "$TEST_TRANSPORT", 00:26:45.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.656 "adrfam": "ipv4", 00:26:45.656 "trsvcid": "$NVMF_PORT", 00:26:45.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.656 "hdgst": ${hdgst:-false}, 00:26:45.656 "ddgst": ${ddgst:-false} 00:26:45.656 }, 00:26:45.656 "method": "bdev_nvme_attach_controller" 00:26:45.656 } 00:26:45.656 EOF 00:26:45.656 )") 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.656 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.656 { 00:26:45.656 "params": { 00:26:45.656 "name": "Nvme$subsystem", 00:26:45.656 "trtype": "$TEST_TRANSPORT", 00:26:45.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.656 "adrfam": "ipv4", 00:26:45.656 "trsvcid": "$NVMF_PORT", 00:26:45.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.657 "hdgst": ${hdgst:-false}, 00:26:45.657 "ddgst": ${ddgst:-false} 00:26:45.657 }, 00:26:45.657 "method": "bdev_nvme_attach_controller" 00:26:45.657 } 00:26:45.657 EOF 00:26:45.657 )") 00:26:45.657 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:26:45.657 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.657 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.657 { 00:26:45.657 "params": { 00:26:45.657 "name": "Nvme$subsystem", 00:26:45.657 "trtype": "$TEST_TRANSPORT", 00:26:45.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.657 "adrfam": "ipv4", 00:26:45.657 "trsvcid": "$NVMF_PORT", 00:26:45.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.657 "hdgst": ${hdgst:-false}, 00:26:45.657 "ddgst": ${ddgst:-false} 00:26:45.657 }, 00:26:45.657 "method": "bdev_nvme_attach_controller" 00:26:45.657 } 00:26:45.657 EOF 00:26:45.657 )") 00:26:45.657 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:45.915 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.915 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.915 { 00:26:45.915 "params": { 00:26:45.915 "name": "Nvme$subsystem", 00:26:45.915 "trtype": "$TEST_TRANSPORT", 00:26:45.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.915 "adrfam": "ipv4", 00:26:45.915 "trsvcid": "$NVMF_PORT", 00:26:45.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.915 "hdgst": ${hdgst:-false}, 00:26:45.915 "ddgst": ${ddgst:-false} 00:26:45.915 }, 00:26:45.915 "method": "bdev_nvme_attach_controller" 00:26:45.915 } 00:26:45.915 EOF 00:26:45.915 )") 00:26:45.915 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:45.915 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.915 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.915 { 00:26:45.915 "params": { 00:26:45.915 "name": "Nvme$subsystem", 00:26:45.915 "trtype": "$TEST_TRANSPORT", 00:26:45.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.915 "adrfam": "ipv4", 00:26:45.915 "trsvcid": "$NVMF_PORT", 00:26:45.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.915 "hdgst": ${hdgst:-false}, 00:26:45.915 "ddgst": ${ddgst:-false} 00:26:45.915 }, 00:26:45.915 "method": "bdev_nvme_attach_controller" 00:26:45.915 } 00:26:45.915 EOF 00:26:45.915 )") 00:26:45.915 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:45.915 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.915 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.915 { 00:26:45.915 "params": { 00:26:45.915 "name": "Nvme$subsystem", 00:26:45.915 "trtype": "$TEST_TRANSPORT", 00:26:45.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.915 "adrfam": "ipv4", 00:26:45.915 "trsvcid": "$NVMF_PORT", 00:26:45.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.915 "hdgst": ${hdgst:-false}, 00:26:45.915 "ddgst": ${ddgst:-false} 00:26:45.915 }, 00:26:45.915 "method": "bdev_nvme_attach_controller" 00:26:45.915 } 00:26:45.915 EOF 00:26:45.915 )") 00:26:45.915 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:45.915 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:26:45.915 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:45.915 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:45.915 "params": { 00:26:45.915 "name": "Nvme1", 00:26:45.915 "trtype": "tcp", 00:26:45.915 "traddr": "10.0.0.2", 00:26:45.915 "adrfam": "ipv4", 00:26:45.915 "trsvcid": "4420", 00:26:45.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:45.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:45.915 "hdgst": false, 00:26:45.915 "ddgst": false 00:26:45.915 }, 00:26:45.915 "method": "bdev_nvme_attach_controller" 00:26:45.915 },{ 00:26:45.915 "params": { 00:26:45.915 "name": "Nvme2", 00:26:45.915 "trtype": "tcp", 00:26:45.915 "traddr": "10.0.0.2", 00:26:45.915 "adrfam": "ipv4", 00:26:45.915 "trsvcid": "4420", 00:26:45.915 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:45.915 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:45.915 "hdgst": false, 00:26:45.915 "ddgst": false 00:26:45.915 }, 00:26:45.915 "method": "bdev_nvme_attach_controller" 00:26:45.915 },{ 00:26:45.915 "params": { 00:26:45.915 "name": "Nvme3", 00:26:45.915 "trtype": "tcp", 00:26:45.915 "traddr": "10.0.0.2", 00:26:45.915 "adrfam": "ipv4", 00:26:45.915 "trsvcid": "4420", 00:26:45.915 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:45.916 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:45.916 "hdgst": false, 00:26:45.916 "ddgst": false 00:26:45.916 }, 00:26:45.916 "method": "bdev_nvme_attach_controller" 00:26:45.916 },{ 00:26:45.916 "params": { 00:26:45.916 "name": "Nvme4", 00:26:45.916 "trtype": "tcp", 00:26:45.916 "traddr": "10.0.0.2", 00:26:45.916 "adrfam": "ipv4", 00:26:45.916 "trsvcid": "4420", 00:26:45.916 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:45.916 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:45.916 "hdgst": false, 00:26:45.916 "ddgst": false 00:26:45.916 }, 00:26:45.916 "method": "bdev_nvme_attach_controller" 00:26:45.916 },{ 00:26:45.916 "params": { 
00:26:45.916 "name": "Nvme5", 00:26:45.916 "trtype": "tcp", 00:26:45.916 "traddr": "10.0.0.2", 00:26:45.916 "adrfam": "ipv4", 00:26:45.916 "trsvcid": "4420", 00:26:45.916 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:45.916 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:45.916 "hdgst": false, 00:26:45.916 "ddgst": false 00:26:45.916 }, 00:26:45.916 "method": "bdev_nvme_attach_controller" 00:26:45.916 },{ 00:26:45.916 "params": { 00:26:45.916 "name": "Nvme6", 00:26:45.916 "trtype": "tcp", 00:26:45.916 "traddr": "10.0.0.2", 00:26:45.916 "adrfam": "ipv4", 00:26:45.916 "trsvcid": "4420", 00:26:45.916 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:45.916 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:45.916 "hdgst": false, 00:26:45.916 "ddgst": false 00:26:45.916 }, 00:26:45.916 "method": "bdev_nvme_attach_controller" 00:26:45.916 },{ 00:26:45.916 "params": { 00:26:45.916 "name": "Nvme7", 00:26:45.916 "trtype": "tcp", 00:26:45.916 "traddr": "10.0.0.2", 00:26:45.916 "adrfam": "ipv4", 00:26:45.916 "trsvcid": "4420", 00:26:45.916 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:45.916 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:45.916 "hdgst": false, 00:26:45.916 "ddgst": false 00:26:45.916 }, 00:26:45.916 "method": "bdev_nvme_attach_controller" 00:26:45.916 },{ 00:26:45.916 "params": { 00:26:45.916 "name": "Nvme8", 00:26:45.916 "trtype": "tcp", 00:26:45.916 "traddr": "10.0.0.2", 00:26:45.916 "adrfam": "ipv4", 00:26:45.916 "trsvcid": "4420", 00:26:45.916 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:45.916 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:45.916 "hdgst": false, 00:26:45.916 "ddgst": false 00:26:45.916 }, 00:26:45.916 "method": "bdev_nvme_attach_controller" 00:26:45.916 },{ 00:26:45.916 "params": { 00:26:45.916 "name": "Nvme9", 00:26:45.916 "trtype": "tcp", 00:26:45.916 "traddr": "10.0.0.2", 00:26:45.916 "adrfam": "ipv4", 00:26:45.916 "trsvcid": "4420", 00:26:45.916 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:45.916 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:26:45.916 "hdgst": false, 00:26:45.916 "ddgst": false 00:26:45.916 }, 00:26:45.916 "method": "bdev_nvme_attach_controller" 00:26:45.916 },{ 00:26:45.916 "params": { 00:26:45.916 "name": "Nvme10", 00:26:45.916 "trtype": "tcp", 00:26:45.916 "traddr": "10.0.0.2", 00:26:45.916 "adrfam": "ipv4", 00:26:45.916 "trsvcid": "4420", 00:26:45.916 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:45.916 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:45.916 "hdgst": false, 00:26:45.916 "ddgst": false 00:26:45.916 }, 00:26:45.916 "method": "bdev_nvme_attach_controller" 00:26:45.916 }' 00:26:45.916 [2024-07-25 04:10:00.976634] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:26:45.916 [2024-07-25 04:10:00.976726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910134 ] 00:26:45.916 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.916 [2024-07-25 04:10:01.012839] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:45.916 [2024-07-25 04:10:01.041669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.916 [2024-07-25 04:10:01.128288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.287 Running I/O for 1 seconds... 
00:26:48.672 00:26:48.672 Latency(us) 00:26:48.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.672 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.672 Verification LBA range: start 0x0 length 0x400 00:26:48.672 Nvme1n1 : 1.13 226.60 14.16 0.00 0.00 277113.36 18447.17 253211.69 00:26:48.672 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.672 Verification LBA range: start 0x0 length 0x400 00:26:48.672 Nvme2n1 : 1.08 236.43 14.78 0.00 0.00 263110.92 20097.71 237677.23 00:26:48.672 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.672 Verification LBA range: start 0x0 length 0x400 00:26:48.672 Nvme3n1 : 1.09 234.49 14.66 0.00 0.00 259967.24 19515.16 253211.69 00:26:48.672 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.672 Verification LBA range: start 0x0 length 0x400 00:26:48.672 Nvme4n1 : 1.18 271.20 16.95 0.00 0.00 221154.57 11019.76 251658.24 00:26:48.672 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.672 Verification LBA range: start 0x0 length 0x400 00:26:48.672 Nvme5n1 : 1.16 219.88 13.74 0.00 0.00 268501.90 20097.71 257872.02 00:26:48.672 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.672 Verification LBA range: start 0x0 length 0x400 00:26:48.672 Nvme6n1 : 1.17 218.51 13.66 0.00 0.00 267178.48 21942.42 273406.48 00:26:48.672 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.672 Verification LBA range: start 0x0 length 0x400 00:26:48.672 Nvme7n1 : 1.19 269.81 16.86 0.00 0.00 213005.20 13107.20 246997.90 00:26:48.672 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.672 Verification LBA range: start 0x0 length 0x400 00:26:48.672 Nvme8n1 : 1.14 231.78 14.49 0.00 0.00 241265.80 3276.80 256318.58 00:26:48.672 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:26:48.672 Verification LBA range: start 0x0 length 0x400 00:26:48.672 Nvme9n1 : 1.18 216.79 13.55 0.00 0.00 255818.15 22330.79 287387.50 00:26:48.672 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.672 Verification LBA range: start 0x0 length 0x400 00:26:48.672 Nvme10n1 : 1.20 267.61 16.73 0.00 0.00 204343.68 12524.66 246997.90 00:26:48.672 =================================================================================================================== 00:26:48.672 Total : 2393.10 149.57 0.00 0.00 244742.00 3276.80 287387.50 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:48.930 
04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:48.930 rmmod nvme_tcp 00:26:48.930 rmmod nvme_fabrics 00:26:48.930 rmmod nvme_keyring 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 909651 ']' 00:26:48.930 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 909651 00:26:48.931 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 909651 ']' 00:26:48.931 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 909651 00:26:48.931 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:26:48.931 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:48.931 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 909651 00:26:48.931 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:48.931 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:48.931 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 909651' 00:26:48.931 killing process with 
pid 909651 00:26:48.931 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 909651 00:26:48.931 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 909651 00:26:49.496 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:49.496 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:49.496 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:49.496 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:49.496 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:49.496 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.496 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.496 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:51.398 00:26:51.398 real 0m11.771s 00:26:51.398 user 0m34.414s 00:26:51.398 sys 0m3.124s 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:51.398 ************************************ 00:26:51.398 END TEST nvmf_shutdown_tc1 00:26:51.398 ************************************ 
00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:51.398 ************************************ 00:26:51.398 START TEST nvmf_shutdown_tc2 00:26:51.398 ************************************ 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.398 04:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 
-- # local -ga e810 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:51.398 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.398 04:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:51.398 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:51.398 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:51.399 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:51.399 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.399 
04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:26:51.399 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:51.657 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.657 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.657 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.657 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.657 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:51.657 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.657 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.657 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.657 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:51.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:51.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:26:51.657 00:26:51.657 --- 10.0.0.2 ping statistics --- 00:26:51.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.657 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:26:51.657 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:51.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:26:51.658 00:26:51.658 --- 10.0.0.1 ping statistics --- 00:26:51.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.658 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:51.658 04:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=910905 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 910905 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 910905 ']' 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:51.658 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.658 [2024-07-25 04:10:06.902726] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:26:51.658 [2024-07-25 04:10:06.902798] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.658 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.658 [2024-07-25 04:10:06.943438] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:51.916 [2024-07-25 04:10:06.970428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:51.916 [2024-07-25 04:10:07.061889] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.916 [2024-07-25 04:10:07.061937] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.916 [2024-07-25 04:10:07.061961] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.916 [2024-07-25 04:10:07.061971] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.916 [2024-07-25 04:10:07.061980] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:51.916 [2024-07-25 04:10:07.062060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.916 [2024-07-25 04:10:07.062124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.916 [2024-07-25 04:10:07.062205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:51.916 [2024-07-25 04:10:07.062207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.916 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:51.916 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:51.916 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:51.916 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:51.916 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.174 [2024-07-25 04:10:07.224744] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.174 04:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:52.174 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:52.175 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:52.175 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:52.175 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.175 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.175 Malloc1 00:26:52.175 [2024-07-25 04:10:07.314352] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.175 Malloc2 00:26:52.175 Malloc3 00:26:52.175 Malloc4 00:26:52.432 Malloc5 00:26:52.432 Malloc6 00:26:52.432 Malloc7 00:26:52.432 Malloc8 00:26:52.432 Malloc9 
00:26:52.690 Malloc10 00:26:52.690 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.690 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:52.690 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:52.690 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.690 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=911085 00:26:52.690 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 911085 /var/tmp/bdevperf.sock 00:26:52.690 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 911085 ']' 00:26:52.690 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:52.690 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:52.690 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:52.690 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:52.690 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:52.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.691 { 00:26:52.691 "params": { 00:26:52.691 "name": "Nvme$subsystem", 00:26:52.691 "trtype": "$TEST_TRANSPORT", 00:26:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.691 "adrfam": "ipv4", 00:26:52.691 "trsvcid": "$NVMF_PORT", 00:26:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.691 "hdgst": ${hdgst:-false}, 00:26:52.691 "ddgst": ${ddgst:-false} 00:26:52.691 }, 00:26:52.691 "method": "bdev_nvme_attach_controller" 00:26:52.691 } 00:26:52.691 EOF 00:26:52.691 )") 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.691 { 00:26:52.691 "params": { 00:26:52.691 "name": "Nvme$subsystem", 00:26:52.691 "trtype": "$TEST_TRANSPORT", 00:26:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.691 
"adrfam": "ipv4", 00:26:52.691 "trsvcid": "$NVMF_PORT", 00:26:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.691 "hdgst": ${hdgst:-false}, 00:26:52.691 "ddgst": ${ddgst:-false} 00:26:52.691 }, 00:26:52.691 "method": "bdev_nvme_attach_controller" 00:26:52.691 } 00:26:52.691 EOF 00:26:52.691 )") 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.691 { 00:26:52.691 "params": { 00:26:52.691 "name": "Nvme$subsystem", 00:26:52.691 "trtype": "$TEST_TRANSPORT", 00:26:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.691 "adrfam": "ipv4", 00:26:52.691 "trsvcid": "$NVMF_PORT", 00:26:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.691 "hdgst": ${hdgst:-false}, 00:26:52.691 "ddgst": ${ddgst:-false} 00:26:52.691 }, 00:26:52.691 "method": "bdev_nvme_attach_controller" 00:26:52.691 } 00:26:52.691 EOF 00:26:52.691 )") 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.691 { 00:26:52.691 "params": { 00:26:52.691 "name": "Nvme$subsystem", 00:26:52.691 "trtype": "$TEST_TRANSPORT", 00:26:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.691 "adrfam": "ipv4", 00:26:52.691 "trsvcid": "$NVMF_PORT", 00:26:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:26:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.691 "hdgst": ${hdgst:-false}, 00:26:52.691 "ddgst": ${ddgst:-false} 00:26:52.691 }, 00:26:52.691 "method": "bdev_nvme_attach_controller" 00:26:52.691 } 00:26:52.691 EOF 00:26:52.691 )") 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.691 { 00:26:52.691 "params": { 00:26:52.691 "name": "Nvme$subsystem", 00:26:52.691 "trtype": "$TEST_TRANSPORT", 00:26:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.691 "adrfam": "ipv4", 00:26:52.691 "trsvcid": "$NVMF_PORT", 00:26:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.691 "hdgst": ${hdgst:-false}, 00:26:52.691 "ddgst": ${ddgst:-false} 00:26:52.691 }, 00:26:52.691 "method": "bdev_nvme_attach_controller" 00:26:52.691 } 00:26:52.691 EOF 00:26:52.691 )") 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.691 { 00:26:52.691 "params": { 00:26:52.691 "name": "Nvme$subsystem", 00:26:52.691 "trtype": "$TEST_TRANSPORT", 00:26:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.691 "adrfam": "ipv4", 00:26:52.691 "trsvcid": "$NVMF_PORT", 00:26:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.691 "hdgst": ${hdgst:-false}, 00:26:52.691 "ddgst": 
${ddgst:-false} 00:26:52.691 }, 00:26:52.691 "method": "bdev_nvme_attach_controller" 00:26:52.691 } 00:26:52.691 EOF 00:26:52.691 )") 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.691 { 00:26:52.691 "params": { 00:26:52.691 "name": "Nvme$subsystem", 00:26:52.691 "trtype": "$TEST_TRANSPORT", 00:26:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.691 "adrfam": "ipv4", 00:26:52.691 "trsvcid": "$NVMF_PORT", 00:26:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.691 "hdgst": ${hdgst:-false}, 00:26:52.691 "ddgst": ${ddgst:-false} 00:26:52.691 }, 00:26:52.691 "method": "bdev_nvme_attach_controller" 00:26:52.691 } 00:26:52.691 EOF 00:26:52.691 )") 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.691 { 00:26:52.691 "params": { 00:26:52.691 "name": "Nvme$subsystem", 00:26:52.691 "trtype": "$TEST_TRANSPORT", 00:26:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.691 "adrfam": "ipv4", 00:26:52.691 "trsvcid": "$NVMF_PORT", 00:26:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.691 "hdgst": ${hdgst:-false}, 00:26:52.691 "ddgst": ${ddgst:-false} 00:26:52.691 }, 00:26:52.691 "method": "bdev_nvme_attach_controller" 00:26:52.691 } 00:26:52.691 EOF 00:26:52.691 
)") 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.691 { 00:26:52.691 "params": { 00:26:52.691 "name": "Nvme$subsystem", 00:26:52.691 "trtype": "$TEST_TRANSPORT", 00:26:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.691 "adrfam": "ipv4", 00:26:52.691 "trsvcid": "$NVMF_PORT", 00:26:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.691 "hdgst": ${hdgst:-false}, 00:26:52.691 "ddgst": ${ddgst:-false} 00:26:52.691 }, 00:26:52.691 "method": "bdev_nvme_attach_controller" 00:26:52.691 } 00:26:52.691 EOF 00:26:52.691 )") 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.691 { 00:26:52.691 "params": { 00:26:52.691 "name": "Nvme$subsystem", 00:26:52.691 "trtype": "$TEST_TRANSPORT", 00:26:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.691 "adrfam": "ipv4", 00:26:52.691 "trsvcid": "$NVMF_PORT", 00:26:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.691 "hdgst": ${hdgst:-false}, 00:26:52.691 "ddgst": ${ddgst:-false} 00:26:52.691 }, 00:26:52.691 "method": "bdev_nvme_attach_controller" 00:26:52.691 } 00:26:52.691 EOF 00:26:52.691 )") 00:26:52.691 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:52.691 
04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:26:52.692 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:26:52.692 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:52.692 "params": { 00:26:52.692 "name": "Nvme1", 00:26:52.692 "trtype": "tcp", 00:26:52.692 "traddr": "10.0.0.2", 00:26:52.692 "adrfam": "ipv4", 00:26:52.692 "trsvcid": "4420", 00:26:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:52.692 "hdgst": false, 00:26:52.692 "ddgst": false 00:26:52.692 }, 00:26:52.692 "method": "bdev_nvme_attach_controller" 00:26:52.692 },{ 00:26:52.692 "params": { 00:26:52.692 "name": "Nvme2", 00:26:52.692 "trtype": "tcp", 00:26:52.692 "traddr": "10.0.0.2", 00:26:52.692 "adrfam": "ipv4", 00:26:52.692 "trsvcid": "4420", 00:26:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:52.692 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:52.692 "hdgst": false, 00:26:52.692 "ddgst": false 00:26:52.692 }, 00:26:52.692 "method": "bdev_nvme_attach_controller" 00:26:52.692 },{ 00:26:52.692 "params": { 00:26:52.692 "name": "Nvme3", 00:26:52.692 "trtype": "tcp", 00:26:52.692 "traddr": "10.0.0.2", 00:26:52.692 "adrfam": "ipv4", 00:26:52.692 "trsvcid": "4420", 00:26:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:52.692 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:52.692 "hdgst": false, 00:26:52.692 "ddgst": false 00:26:52.692 }, 00:26:52.692 "method": "bdev_nvme_attach_controller" 00:26:52.692 },{ 00:26:52.692 "params": { 00:26:52.692 "name": "Nvme4", 00:26:52.692 "trtype": "tcp", 00:26:52.692 "traddr": "10.0.0.2", 00:26:52.692 "adrfam": "ipv4", 00:26:52.692 "trsvcid": "4420", 00:26:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:52.692 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:52.692 "hdgst": false, 00:26:52.692 "ddgst": false 00:26:52.692 }, 
00:26:52.692 "method": "bdev_nvme_attach_controller" 00:26:52.692 },{ 00:26:52.692 "params": { 00:26:52.692 "name": "Nvme5", 00:26:52.692 "trtype": "tcp", 00:26:52.692 "traddr": "10.0.0.2", 00:26:52.692 "adrfam": "ipv4", 00:26:52.692 "trsvcid": "4420", 00:26:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:52.692 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:52.692 "hdgst": false, 00:26:52.692 "ddgst": false 00:26:52.692 }, 00:26:52.692 "method": "bdev_nvme_attach_controller" 00:26:52.692 },{ 00:26:52.692 "params": { 00:26:52.692 "name": "Nvme6", 00:26:52.692 "trtype": "tcp", 00:26:52.692 "traddr": "10.0.0.2", 00:26:52.692 "adrfam": "ipv4", 00:26:52.692 "trsvcid": "4420", 00:26:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:52.692 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:52.692 "hdgst": false, 00:26:52.692 "ddgst": false 00:26:52.692 }, 00:26:52.692 "method": "bdev_nvme_attach_controller" 00:26:52.692 },{ 00:26:52.692 "params": { 00:26:52.692 "name": "Nvme7", 00:26:52.692 "trtype": "tcp", 00:26:52.692 "traddr": "10.0.0.2", 00:26:52.692 "adrfam": "ipv4", 00:26:52.692 "trsvcid": "4420", 00:26:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:52.692 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:52.692 "hdgst": false, 00:26:52.692 "ddgst": false 00:26:52.692 }, 00:26:52.692 "method": "bdev_nvme_attach_controller" 00:26:52.692 },{ 00:26:52.692 "params": { 00:26:52.692 "name": "Nvme8", 00:26:52.692 "trtype": "tcp", 00:26:52.692 "traddr": "10.0.0.2", 00:26:52.692 "adrfam": "ipv4", 00:26:52.692 "trsvcid": "4420", 00:26:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:52.692 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:52.692 "hdgst": false, 00:26:52.692 "ddgst": false 00:26:52.692 }, 00:26:52.692 "method": "bdev_nvme_attach_controller" 00:26:52.692 },{ 00:26:52.692 "params": { 00:26:52.692 "name": "Nvme9", 00:26:52.692 "trtype": "tcp", 00:26:52.692 "traddr": "10.0.0.2", 00:26:52.692 "adrfam": "ipv4", 00:26:52.692 "trsvcid": "4420", 00:26:52.692 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:52.692 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:52.692 "hdgst": false, 00:26:52.692 "ddgst": false 00:26:52.692 }, 00:26:52.692 "method": "bdev_nvme_attach_controller" 00:26:52.692 },{ 00:26:52.692 "params": { 00:26:52.692 "name": "Nvme10", 00:26:52.692 "trtype": "tcp", 00:26:52.692 "traddr": "10.0.0.2", 00:26:52.692 "adrfam": "ipv4", 00:26:52.692 "trsvcid": "4420", 00:26:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:52.692 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:52.692 "hdgst": false, 00:26:52.692 "ddgst": false 00:26:52.692 }, 00:26:52.692 "method": "bdev_nvme_attach_controller" 00:26:52.692 }' 00:26:52.692 [2024-07-25 04:10:07.840454] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:26:52.692 [2024-07-25 04:10:07.840534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911085 ] 00:26:52.692 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.692 [2024-07-25 04:10:07.875081] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:52.692 [2024-07-25 04:10:07.903915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.950 [2024-07-25 04:10:07.990866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.320 Running I/O for 10 seconds... 
00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:26:54.582 04:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:54.840 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:54.840 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:54.840 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:54.840 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:54.840 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.840 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.098 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.098 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:26:55.098 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:26:55.098 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 911085 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 911085 ']' 
00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 911085 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 911085 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 911085' 00:26:55.355 killing process with pid 911085 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 911085 00:26:55.355 04:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 911085 00:26:55.355 Received shutdown signal, test time was about 1.074966 seconds 00:26:55.355 00:26:55.355 Latency(us) 00:26:55.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.355 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.355 Verification LBA range: start 0x0 length 0x400 00:26:55.355 Nvme1n1 : 1.07 238.33 14.90 0.00 0.00 265799.30 22136.60 276513.37 00:26:55.355 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.355 Verification LBA range: start 0x0 length 0x400 00:26:55.355 Nvme2n1 : 1.07 239.11 14.94 0.00 0.00 260344.41 21359.88 273406.48 00:26:55.355 Job: Nvme3n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:26:55.355 Verification LBA range: start 0x0 length 0x400 00:26:55.355 Nvme3n1 : 1.05 244.96 15.31 0.00 0.00 249312.71 18252.99 268746.15 00:26:55.355 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.355 Verification LBA range: start 0x0 length 0x400 00:26:55.355 Nvme4n1 : 1.04 190.92 11.93 0.00 0.00 309645.59 6092.42 278066.82 00:26:55.355 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.355 Verification LBA range: start 0x0 length 0x400 00:26:55.355 Nvme5n1 : 1.06 240.95 15.06 0.00 0.00 244441.88 19126.80 257872.02 00:26:55.355 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.355 Verification LBA range: start 0x0 length 0x400 00:26:55.355 Nvme6n1 : 1.05 243.92 15.24 0.00 0.00 236546.84 18932.62 271853.04 00:26:55.355 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.355 Verification LBA range: start 0x0 length 0x400 00:26:55.355 Nvme7n1 : 1.07 240.04 15.00 0.00 0.00 236557.27 19320.98 279620.27 00:26:55.355 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.356 Verification LBA range: start 0x0 length 0x400 00:26:55.356 Nvme8n1 : 1.06 242.35 15.15 0.00 0.00 229383.40 17961.72 271853.04 00:26:55.356 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.356 Verification LBA range: start 0x0 length 0x400 00:26:55.356 Nvme9n1 : 1.03 186.75 11.67 0.00 0.00 290754.81 22330.79 276513.37 00:26:55.356 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.356 Verification LBA range: start 0x0 length 0x400 00:26:55.356 Nvme10n1 : 1.04 185.47 11.59 0.00 0.00 286713.17 18641.35 290494.39 00:26:55.356 =================================================================================================================== 00:26:55.356 Total : 2252.80 140.80 0.00 0.00 258262.19 6092.42 290494.39 00:26:55.612 04:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:26:56.543 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 910905 00:26:56.543 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:26:56.543 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:56.543 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:56.543 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:56.543 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:56.543 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:56.543 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:26:56.543 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:56.543 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:26:56.543 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:56.543 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:56.543 rmmod nvme_tcp 00:26:56.800 rmmod nvme_fabrics 00:26:56.800 rmmod nvme_keyring 00:26:56.800 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:56.800 04:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:26:56.800 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:26:56.800 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 910905 ']' 00:26:56.800 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 910905 00:26:56.800 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 910905 ']' 00:26:56.801 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 910905 00:26:56.801 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:56.801 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:56.801 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 910905 00:26:56.801 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:56.801 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:56.801 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 910905' 00:26:56.801 killing process with pid 910905 00:26:56.801 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 910905 00:26:56.801 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 910905 00:26:57.367 04:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:57.367 04:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:57.367 04:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:57.367 04:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:57.367 04:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:57.367 04:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.367 04:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.367 04:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:59.273 00:26:59.273 real 0m7.751s 00:26:59.273 user 0m23.679s 00:26:59.273 sys 0m1.507s 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:59.273 ************************************ 00:26:59.273 END TEST nvmf_shutdown_tc2 00:26:59.273 ************************************ 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:59.273 ************************************ 00:26:59.273 START TEST nvmf_shutdown_tc3 00:26:59.273 ************************************ 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 
00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:59.273 04:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:59.273 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:59.273 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:59.273 04:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:59.273 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.274 04:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:59.274 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:59.274 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- 
# [[ yes == yes ]] 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:59.274 04:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:59.274 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:59.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:26:59.532 00:26:59.532 --- 10.0.0.2 ping statistics --- 00:26:59.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.532 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:59.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:59.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:26:59.532 00:26:59.532 --- 10.0.0.1 ping statistics --- 00:26:59.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.532 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:59.532 
04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=912003 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 912003 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 912003 ']' 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.532 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:59.532 [2024-07-25 04:10:14.691519] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:26:59.532 [2024-07-25 04:10:14.691616] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.532 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.532 [2024-07-25 04:10:14.728989] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:59.532 [2024-07-25 04:10:14.755854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:59.789 [2024-07-25 04:10:14.844008] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.789 [2024-07-25 04:10:14.844070] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.789 [2024-07-25 04:10:14.844084] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.789 [2024-07-25 04:10:14.844094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.789 [2024-07-25 04:10:14.844104] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
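`nvmf_tgt` was launched with `-m 0x1E`, and the reactor messages that follow show exactly four cores coming up. A quick sketch (not SPDK code) of how a hex core mask expands to that core list:

```shell
# Expand a CPU core bitmask into the list of set core indices.
cores_from_mask() {
  local mask=$(( $1 )) i cores=()
  for (( i = 0; i < 64; i++ )); do
    if (( (mask >> i) & 1 )); then
      cores+=("$i")
    fi
  done
  echo "${cores[@]}"
}

# -m 0x1E is binary 11110: cores 1-4, matching the four
# "Reactor started on core N" lines in this log.
cores_from_mask 0x1E   # 1 2 3 4
```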
00:26:59.789 [2024-07-25 04:10:14.844187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.789 [2024-07-25 04:10:14.844258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.789 [2024-07-25 04:10:14.844318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:59.789 [2024-07-25 04:10:14.844321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.789 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:59.789 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:59.789 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:59.789 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:59.789 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:59.789 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.789 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:59.789 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.789 04:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:59.789 [2024-07-25 04:10:14.996805] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.789 04:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.789 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:59.789 Malloc1 00:26:59.790 [2024-07-25 04:10:15.080543] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:00.046 Malloc2 00:27:00.046 Malloc3 00:27:00.046 Malloc4 00:27:00.046 Malloc5 00:27:00.046 Malloc6 00:27:00.305 Malloc7 00:27:00.305 Malloc8 00:27:00.305 Malloc9 
00:27:00.305 Malloc10 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=912173 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 912173 /var/tmp/bdevperf.sock 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 912173 ']' 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:27:00.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.305 { 00:27:00.305 "params": { 00:27:00.305 "name": "Nvme$subsystem", 00:27:00.305 "trtype": "$TEST_TRANSPORT", 00:27:00.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.305 "adrfam": "ipv4", 00:27:00.305 "trsvcid": "$NVMF_PORT", 00:27:00.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.305 "hdgst": ${hdgst:-false}, 00:27:00.305 "ddgst": ${ddgst:-false} 00:27:00.305 }, 00:27:00.305 "method": "bdev_nvme_attach_controller" 00:27:00.305 } 00:27:00.305 EOF 00:27:00.305 )") 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.305 { 00:27:00.305 "params": { 00:27:00.305 "name": "Nvme$subsystem", 00:27:00.305 "trtype": "$TEST_TRANSPORT", 00:27:00.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.305 "adrfam": "ipv4", 00:27:00.305 "trsvcid": "$NVMF_PORT", 00:27:00.305 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.305 "hdgst": ${hdgst:-false}, 00:27:00.305 "ddgst": ${ddgst:-false} 00:27:00.305 }, 00:27:00.305 "method": "bdev_nvme_attach_controller" 00:27:00.305 } 00:27:00.305 EOF 00:27:00.305 )") 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.305 { 00:27:00.305 "params": { 00:27:00.305 "name": "Nvme$subsystem", 00:27:00.305 "trtype": "$TEST_TRANSPORT", 00:27:00.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.305 "adrfam": "ipv4", 00:27:00.305 "trsvcid": "$NVMF_PORT", 00:27:00.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.305 "hdgst": ${hdgst:-false}, 00:27:00.305 "ddgst": ${ddgst:-false} 00:27:00.305 }, 00:27:00.305 "method": "bdev_nvme_attach_controller" 00:27:00.305 } 00:27:00.305 EOF 00:27:00.305 )") 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.305 { 00:27:00.305 "params": { 00:27:00.305 "name": "Nvme$subsystem", 00:27:00.305 "trtype": "$TEST_TRANSPORT", 00:27:00.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.305 "adrfam": "ipv4", 00:27:00.305 "trsvcid": "$NVMF_PORT", 00:27:00.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.305 "hdgst": 
${hdgst:-false}, 00:27:00.305 "ddgst": ${ddgst:-false} 00:27:00.305 }, 00:27:00.305 "method": "bdev_nvme_attach_controller" 00:27:00.305 } 00:27:00.305 EOF 00:27:00.305 )") 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.305 { 00:27:00.305 "params": { 00:27:00.305 "name": "Nvme$subsystem", 00:27:00.305 "trtype": "$TEST_TRANSPORT", 00:27:00.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.305 "adrfam": "ipv4", 00:27:00.305 "trsvcid": "$NVMF_PORT", 00:27:00.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.305 "hdgst": ${hdgst:-false}, 00:27:00.305 "ddgst": ${ddgst:-false} 00:27:00.305 }, 00:27:00.305 "method": "bdev_nvme_attach_controller" 00:27:00.305 } 00:27:00.305 EOF 00:27:00.305 )") 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.305 { 00:27:00.305 "params": { 00:27:00.305 "name": "Nvme$subsystem", 00:27:00.305 "trtype": "$TEST_TRANSPORT", 00:27:00.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.305 "adrfam": "ipv4", 00:27:00.305 "trsvcid": "$NVMF_PORT", 00:27:00.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.305 "hdgst": ${hdgst:-false}, 00:27:00.305 "ddgst": ${ddgst:-false} 00:27:00.305 }, 00:27:00.305 "method": "bdev_nvme_attach_controller" 
00:27:00.305 } 00:27:00.305 EOF 00:27:00.305 )") 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.305 { 00:27:00.305 "params": { 00:27:00.305 "name": "Nvme$subsystem", 00:27:00.305 "trtype": "$TEST_TRANSPORT", 00:27:00.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.305 "adrfam": "ipv4", 00:27:00.305 "trsvcid": "$NVMF_PORT", 00:27:00.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.305 "hdgst": ${hdgst:-false}, 00:27:00.305 "ddgst": ${ddgst:-false} 00:27:00.305 }, 00:27:00.305 "method": "bdev_nvme_attach_controller" 00:27:00.305 } 00:27:00.305 EOF 00:27:00.305 )") 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.305 { 00:27:00.305 "params": { 00:27:00.305 "name": "Nvme$subsystem", 00:27:00.305 "trtype": "$TEST_TRANSPORT", 00:27:00.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.305 "adrfam": "ipv4", 00:27:00.305 "trsvcid": "$NVMF_PORT", 00:27:00.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.305 "hdgst": ${hdgst:-false}, 00:27:00.305 "ddgst": ${ddgst:-false} 00:27:00.305 }, 00:27:00.305 "method": "bdev_nvme_attach_controller" 00:27:00.305 } 00:27:00.305 EOF 00:27:00.305 )") 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@554 -- # cat 00:27:00.305 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.306 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.306 { 00:27:00.306 "params": { 00:27:00.306 "name": "Nvme$subsystem", 00:27:00.306 "trtype": "$TEST_TRANSPORT", 00:27:00.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.306 "adrfam": "ipv4", 00:27:00.306 "trsvcid": "$NVMF_PORT", 00:27:00.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.306 "hdgst": ${hdgst:-false}, 00:27:00.306 "ddgst": ${ddgst:-false} 00:27:00.306 }, 00:27:00.306 "method": "bdev_nvme_attach_controller" 00:27:00.306 } 00:27:00.306 EOF 00:27:00.306 )") 00:27:00.306 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:00.306 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.306 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.306 { 00:27:00.306 "params": { 00:27:00.306 "name": "Nvme$subsystem", 00:27:00.306 "trtype": "$TEST_TRANSPORT", 00:27:00.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.306 "adrfam": "ipv4", 00:27:00.306 "trsvcid": "$NVMF_PORT", 00:27:00.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.306 "hdgst": ${hdgst:-false}, 00:27:00.306 "ddgst": ${ddgst:-false} 00:27:00.306 }, 00:27:00.306 "method": "bdev_nvme_attach_controller" 00:27:00.306 } 00:27:00.306 EOF 00:27:00.306 )") 00:27:00.306 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:00.306 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@556 -- # jq . 00:27:00.306 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:00.306 04:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:00.306 "params": { 00:27:00.306 "name": "Nvme1", 00:27:00.306 "trtype": "tcp", 00:27:00.306 "traddr": "10.0.0.2", 00:27:00.306 "adrfam": "ipv4", 00:27:00.306 "trsvcid": "4420", 00:27:00.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:00.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:00.306 "hdgst": false, 00:27:00.306 "ddgst": false 00:27:00.306 }, 00:27:00.306 "method": "bdev_nvme_attach_controller" 00:27:00.306 },{ 00:27:00.306 "params": { 00:27:00.306 "name": "Nvme2", 00:27:00.306 "trtype": "tcp", 00:27:00.306 "traddr": "10.0.0.2", 00:27:00.306 "adrfam": "ipv4", 00:27:00.306 "trsvcid": "4420", 00:27:00.306 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:00.306 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:00.306 "hdgst": false, 00:27:00.306 "ddgst": false 00:27:00.306 }, 00:27:00.306 "method": "bdev_nvme_attach_controller" 00:27:00.306 },{ 00:27:00.306 "params": { 00:27:00.306 "name": "Nvme3", 00:27:00.306 "trtype": "tcp", 00:27:00.306 "traddr": "10.0.0.2", 00:27:00.306 "adrfam": "ipv4", 00:27:00.306 "trsvcid": "4420", 00:27:00.306 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:00.306 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:00.306 "hdgst": false, 00:27:00.306 "ddgst": false 00:27:00.306 }, 00:27:00.306 "method": "bdev_nvme_attach_controller" 00:27:00.306 },{ 00:27:00.306 "params": { 00:27:00.306 "name": "Nvme4", 00:27:00.306 "trtype": "tcp", 00:27:00.306 "traddr": "10.0.0.2", 00:27:00.306 "adrfam": "ipv4", 00:27:00.306 "trsvcid": "4420", 00:27:00.306 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:00.306 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:00.306 "hdgst": false, 00:27:00.306 "ddgst": false 00:27:00.306 }, 00:27:00.306 "method": "bdev_nvme_attach_controller" 00:27:00.306 },{ 
00:27:00.306 "params": { 00:27:00.306 "name": "Nvme5", 00:27:00.306 "trtype": "tcp", 00:27:00.306 "traddr": "10.0.0.2", 00:27:00.306 "adrfam": "ipv4", 00:27:00.306 "trsvcid": "4420", 00:27:00.306 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:00.306 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:00.306 "hdgst": false, 00:27:00.306 "ddgst": false 00:27:00.306 }, 00:27:00.306 "method": "bdev_nvme_attach_controller" 00:27:00.306 },{ 00:27:00.306 "params": { 00:27:00.306 "name": "Nvme6", 00:27:00.306 "trtype": "tcp", 00:27:00.306 "traddr": "10.0.0.2", 00:27:00.306 "adrfam": "ipv4", 00:27:00.306 "trsvcid": "4420", 00:27:00.306 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:00.306 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:00.306 "hdgst": false, 00:27:00.306 "ddgst": false 00:27:00.306 }, 00:27:00.306 "method": "bdev_nvme_attach_controller" 00:27:00.306 },{ 00:27:00.306 "params": { 00:27:00.306 "name": "Nvme7", 00:27:00.306 "trtype": "tcp", 00:27:00.306 "traddr": "10.0.0.2", 00:27:00.306 "adrfam": "ipv4", 00:27:00.306 "trsvcid": "4420", 00:27:00.306 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:00.306 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:00.306 "hdgst": false, 00:27:00.306 "ddgst": false 00:27:00.306 }, 00:27:00.306 "method": "bdev_nvme_attach_controller" 00:27:00.306 },{ 00:27:00.306 "params": { 00:27:00.306 "name": "Nvme8", 00:27:00.306 "trtype": "tcp", 00:27:00.306 "traddr": "10.0.0.2", 00:27:00.306 "adrfam": "ipv4", 00:27:00.306 "trsvcid": "4420", 00:27:00.306 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:00.306 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:00.306 "hdgst": false, 00:27:00.306 "ddgst": false 00:27:00.306 }, 00:27:00.306 "method": "bdev_nvme_attach_controller" 00:27:00.306 },{ 00:27:00.306 "params": { 00:27:00.306 "name": "Nvme9", 00:27:00.306 "trtype": "tcp", 00:27:00.306 "traddr": "10.0.0.2", 00:27:00.306 "adrfam": "ipv4", 00:27:00.306 "trsvcid": "4420", 00:27:00.306 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:00.306 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:00.306 "hdgst": false, 00:27:00.306 "ddgst": false 00:27:00.306 }, 00:27:00.306 "method": "bdev_nvme_attach_controller" 00:27:00.306 },{ 00:27:00.306 "params": { 00:27:00.306 "name": "Nvme10", 00:27:00.306 "trtype": "tcp", 00:27:00.306 "traddr": "10.0.0.2", 00:27:00.306 "adrfam": "ipv4", 00:27:00.306 "trsvcid": "4420", 00:27:00.306 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:00.306 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:00.306 "hdgst": false, 00:27:00.306 "ddgst": false 00:27:00.306 }, 00:27:00.306 "method": "bdev_nvme_attach_controller" 00:27:00.306 }' 00:27:00.306 [2024-07-25 04:10:15.595088] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:27:00.306 [2024-07-25 04:10:15.595180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912173 ] 00:27:00.564 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.564 [2024-07-25 04:10:15.631327] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:00.564 [2024-07-25 04:10:15.661515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.564 [2024-07-25 04:10:15.748394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.462 Running I/O for 10 seconds... 
00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:02.462 04:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.462 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:02.720 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.720 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:02.720 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:02.720 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:02.978 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:02.978 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:02.978 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:02.978 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:02.978 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.978 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:02.978 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:27:02.978 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:02.978 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:02.978 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:03.252 04:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 912003 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 912003 ']' 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 912003 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 912003 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 912003' 00:27:03.252 killing process with pid 912003 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 912003 00:27:03.252 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 912003 00:27:03.252 [2024-07-25 04:10:18.388105] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with 
the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.252 [2024-07-25 04:10:18.388630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388771] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 
00:27:03.253 [2024-07-25 04:10:18.388784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388852] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388882] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388919] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388931] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 
04:10:18.388957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.388979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389252] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389318] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.389343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6af0 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.390577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.253 [2024-07-25 04:10:18.390622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.390641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.253 [2024-07-25 04:10:18.390656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.390670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.253 
[2024-07-25 04:10:18.390685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.390700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.253 [2024-07-25 04:10:18.390714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.390728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cef10 is same with the state(5) to be set 00:27:03.253 [2024-07-25 04:10:18.392261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-25 04:10:18.392297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.392326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-25 04:10:18.392343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.392361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-25 04:10:18.392375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.392391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-25 04:10:18.392405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.392422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-25 04:10:18.392437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.392452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-25 04:10:18.392466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.392483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-25 04:10:18.392498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.392515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-25 04:10:18.392529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.392555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-25 04:10:18.392570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.392587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-25 04:10:18.392610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.392627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-25 04:10:18.392643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.392659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-25 04:10:18.392675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.392692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-25 04:10:18.392706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.253 [2024-07-25 04:10:18.392727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.392742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.392758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.392774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.392791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.392807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.392824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.392844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.392861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.392875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.392892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.392908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.392925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.392941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.392957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 
04:10:18.392972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.392988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 
[2024-07-25 04:10:18.393712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.393976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-25 04:10:18.393993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.254 [2024-07-25 04:10:18.394007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.255 [2024-07-25 04:10:18.394023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-25 04:10:18.394038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.255 [2024-07-25 04:10:18.394054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-25 04:10:18.394069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.255 [2024-07-25 04:10:18.394085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-25 04:10:18.394099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.255 [2024-07-25 04:10:18.394115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-25 04:10:18.394131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.255 [2024-07-25 04:10:18.394148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-25 04:10:18.394164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.255 [2024-07-25 04:10:18.394180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-25 04:10:18.394195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.255 [2024-07-25 04:10:18.394211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-25 04:10:18.394226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:03.255 [2024-07-25 04:10:18.394258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-25 04:10:18.394274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.255 [2024-07-25 04:10:18.394290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-25 04:10:18.394305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.255 [2024-07-25 04:10:18.394321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-25 04:10:18.394335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.255 [2024-07-25 04:10:18.394344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-25 04:10:18.394376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.255 [2024-07-25 04:10:18.394378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with 
the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394818] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394854] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394891] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 
00:27:03.255 [2024-07-25 04:10:18.394903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394903] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14f14a0 was disconnected and freed. reset controller. 00:27:03.255 [2024-07-25 04:10:18.394915] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.394998] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.395010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.395022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 
04:10:18.395027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-25 04:10:18.395037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.395052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.395051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.255 [2024-07-25 04:10:18.395077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.395087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-25 04:10:18.395090] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.255 [2024-07-25 04:10:18.395104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.256 [2024-07-25 04:10:18.395104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.256 [2024-07-25 04:10:18.395123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the 
state(5) to be set 00:27:03.256 [2024-07-25 04:10:18.395142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.256 [2024-07-25 04:10:18.395146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.256 [2024-07-25 04:10:18.395163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.256 [2024-07-25 04:10:18.395178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.256 [2024-07-25 04:10:18.395193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7470 is same with the state(5) to be set 00:27:03.256 [2024-07-25 04:10:18.395196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395624] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.395979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.395994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.396010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.396025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.396041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.396065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.396081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.396096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.396116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.396140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.396156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.396170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.396186] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.396201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.396217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.396231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.396257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.256 [2024-07-25 04:10:18.396273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.256 [2024-07-25 04:10:18.396289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396365] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 
04:10:18.396715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.257 [2024-07-25 04:10:18.396964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.257 [2024-07-25 04:10:18.396979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396992] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.257 [2024-07-25 04:10:18.396991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.258 [2024-07-25 04:10:18.397007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.258 [2024-07-25 04:10:18.397034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.258 [2024-07-25 04:10:18.397047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.258 [2024-07-25 04:10:18.397061] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.258 [2024-07-25 04:10:18.397086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 
00:27:03.258 [2024-07-25 04:10:18.397091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.258 [2024-07-25 04:10:18.397098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.258 [2024-07-25 04:10:18.397110] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.258 [2024-07-25 04:10:18.397122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.258 [2024-07-25 04:10:18.397148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.258 [2024-07-25 04:10:18.397160] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same 
with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397275] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x147d340 was disconnected and freed. reset controller. 00:27:03.258 [2024-07-25 04:10:18.397283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 
00:27:03.258 [2024-07-25 04:10:18.397302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397355] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be 
set 00:27:03.258 [2024-07-25 04:10:18.397467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397579] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.397624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7950 is same with the state(5) to be set 00:27:03.258 [2024-07-25 
04:10:18.398610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398670] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398683] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398765] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398776] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398788] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398871] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:03.258 [2024-07-25 04:10:18.398883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398945] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1571ba0 (9): Bad file descriptor 00:27:03.258 [2024-07-25 04:10:18.398958] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.258 [2024-07-25 04:10:18.398988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259 [2024-07-25 04:10:18.399014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399028] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259 [2024-07-25 04:10:18.399040] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259 [2024-07-25 04:10:18.399067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259 [2024-07-25 04:10:18.399080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259 [2024-07-25 04:10:18.399093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259 [2024-07-25 04:10:18.399106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259 [2024-07-25 04:10:18.399133] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259 [2024-07-25 04:10:18.399146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259 [2024-07-25 04:10:18.399159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 04:10:18.399171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259 the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with [2024-07-25 04:10:18.399188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:12the state(5) to be set 00:27:03.259 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259 [2024-07-25 04:10:18.399202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 [2024-07-25 04:10:18.399204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259 [2024-07-25 04:10:18.399215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259 
[2024-07-25 04:10:18.399221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399239] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7e10 is same with the state(5) to be set 00:27:03.259
[2024-07-25 04:10:18.399469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399533] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.259
[2024-07-25 04:10:18.399814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.259
[2024-07-25 04:10:18.399830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.399848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.399865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.399880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.399896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.399911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.399929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.399943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.399960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.399975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.399991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:03.260 [2024-07-25 04:10:18.400749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.260
[2024-07-25 04:10:18.400764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.260
[2024-07-25 04:10:18.400799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.260
[2024-07-25 04:10:18.400816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.260
[2024-07-25 04:10:18.400844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.260
[2024-07-25 04:10:18.400859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.260
[2024-07-25 04:10:18.400876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.260
[2024-07-25 04:10:18.400892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.260
[2024-07-25 04:10:18.400892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.260
[2024-07-25 04:10:18.400910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.260
[2024-07-25 04:10:18.400925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.260
[2024-07-25 04:10:18.400941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.260
[2024-07-25 04:10:18.400956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.260
[2024-07-25 04:10:18.400963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.260
[2024-07-25 04:10:18.400973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.260
[2024-07-25 04:10:18.400975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.400988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.400989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.261
[2024-07-25 04:10:18.401000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.261
[2024-07-25 04:10:18.401013] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.261
[2024-07-25 04:10:18.401034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.261
[2024-07-25 04:10:18.401047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.261
[2024-07-25 04:10:18.401059] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.261
[2024-07-25 04:10:18.401071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.261
[2024-07-25 04:10:18.401098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401110] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.261
[2024-07-25 04:10:18.401123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.261
[2024-07-25 04:10:18.401135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.261
[2024-07-25 04:10:18.401149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.261
[2024-07-25 04:10:18.401178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.261
[2024-07-25 04:10:18.401190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.261
[2024-07-25 04:10:18.401202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.261
[2024-07-25 04:10:18.401214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.261
[2024-07-25 04:10:18.401275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.261
[2024-07-25 04:10:18.401289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.261
[2024-07-25 04:10:18.401301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7f70 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401387] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14f7f70 was disconnected and freed. reset controller. 00:27:03.261
[2024-07-25 04:10:18.401393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401639] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 00:27:03.261
[2024-07-25 04:10:18.401661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b82f0 is same with the state(5) to be set 
00:27:03.261 [2024-07-25 04:10:18.403001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:03.261 [2024-07-25 04:10:18.403066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159aad0 (9): Bad file descriptor
00:27:03.261 [2024-07-25 04:10:18.403090] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.262 [2024-07-25 04:10:18.403172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.262 [2024-07-25 04:10:18.403202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.262 [2024-07-25 04:10:18.403229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.262 [2024-07-25 04:10:18.403266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.262 [2024-07-25 04:10:18.403293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.262 [2024-07-25 04:10:18.403306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.262 [2024-07-25 04:10:18.403319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.262 [2024-07-25 04:10:18.403332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1594b00 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403370] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.262 [2024-07-25 04:10:18.403429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.262 [2024-07-25 04:10:18.403430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.262 [2024-07-25 04:10:18.403459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.262 [2024-07-25 04:10:18.403472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.262 [2024-07-25 04:10:18.403485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.262 [2024-07-25 04:10:18.403499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.262 [2024-07-25 04:10:18.403512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.262 [2024-07-25 04:10:18.403524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec4610 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.262 [2024-07-25 04:10:18.403607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.262 [2024-07-25 04:10:18.403637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.262 [2024-07-25 04:10:18.403650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.262 [2024-07-25 04:10:18.403663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.262 [2024-07-25 04:10:18.403696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.262 [2024-07-25 04:10:18.403711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.262 [2024-07-25 04:10:18.403724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.262 [2024-07-25 04:10:18.403738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f3380 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cef10 (9): Bad file descriptor
00:27:03.262 [2024-07-25 04:10:18.403777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.262 [2024-07-25 04:10:18.403828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.262 [2024-07-25 04:10:18.403855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.262 [2024-07-25 04:10:18.403868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.262 [2024-07-25 04:10:18.403874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.263 [2024-07-25 04:10:18.403884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.263 [2024-07-25 04:10:18.403889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.263 [2024-07-25 04:10:18.403898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.263 [2024-07-25 04:10:18.403904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.263 [2024-07-25 04:10:18.403910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.263 [2024-07-25 04:10:18.403919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.263 [2024-07-25 04:10:18.403925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.263 [2024-07-25 04:10:18.403934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.263 [2024-07-25 04:10:18.403938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.263 [2024-07-25 04:10:18.403947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9890 is same with the state(5) to be set
00:27:03.263 [2024-07-25 04:10:18.403952] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.263 [2024-07-25 04:10:18.403965] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.263 [2024-07-25 04:10:18.403977] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.263 [2024-07-25 04:10:18.403990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.263 [2024-07-25 04:10:18.403993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.263 [2024-07-25 04:10:18.404004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.263 [2024-07-25 04:10:18.404015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.263 [2024-07-25 04:10:18.404020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b87b0 is same with the state(5) to be set
00:27:03.263 [2024-07-25 04:10:18.404031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.263 [2024-07-25 04:10:18.404046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.263 [2024-07-25 04:10:18.404061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.263 [2024-07-25 04:10:18.404076] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.263 [2024-07-25 04:10:18.404090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.263 [2024-07-25 04:10:18.404104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.263 [2024-07-25 04:10:18.404118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d1e0 is same with the state(5) to be set
00:27:03.263 [2024-07-25 04:10:18.405100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b8c70 is same with the state(5) to be set
00:27:03.263 (last message repeated 55 times: 04:10:18.405135 to 04:10:18.405863)
00:27:03.263 [2024-07-25 04:10:18.405876]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b8c70 is same with the state(5) to be set 00:27:03.263 [2024-07-25 04:10:18.405889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b8c70 is same with the state(5) to be set 00:27:03.263 [2024-07-25 04:10:18.405901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b8c70 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.405913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b8c70 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.405925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b8c70 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.405937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b8c70 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.405949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b8c70 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.264 [2024-07-25 04:10:18.406264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.264 [2024-07-25 04:10:18.406293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1571ba0 with addr=10.0.0.2, port=4420 00:27:03.264 [2024-07-25 04:10:18.406311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1571ba0 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same 
with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406854] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 
00:27:03.264 [2024-07-25 04:10:18.406947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.406976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 
04:10:18.407220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.264 [2024-07-25 04:10:18.407248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159aad0 with addr=10.0.0.2, port=4420 00:27:03.264 [2024-07-25 04:10:18.407289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159aad0 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.264 [2024-07-25 04:10:18.407425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cef10 with addr=10.0.0.2, port=4420 00:27:03.264 [2024-07-25 04:10:18.407452] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cef10 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set
00:27:03.264 [2024-07-25 04:10:18.407485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1571ba0 (9): Bad file descriptor 00:27:03.264 [2024-07-25 04:10:18.407491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 
04:10:18.407642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.407714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9130 is same with the state(5) to be set 00:27:03.264 [2024-07-25 04:10:18.408268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159aad0 (9): Bad file descriptor 00:27:03.264 [2024-07-25 04:10:18.408300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cef10 (9): Bad file descriptor 00:27:03.264 [2024-07-25 04:10:18.408322] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:03.264 [2024-07-25 04:10:18.408336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:03.264 [2024-07-25 04:10:18.408352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:27:03.264 [2024-07-25 04:10:18.408443] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:03.264 [2024-07-25 04:10:18.408513] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:03.264 [2024-07-25 04:10:18.408595] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:03.265 [2024-07-25 04:10:18.408670] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:03.265 [2024-07-25 04:10:18.408874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.265 [2024-07-25 04:10:18.408899] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:03.265 [2024-07-25 04:10:18.408913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:03.265 [2024-07-25 04:10:18.408928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:03.265 [2024-07-25 04:10:18.408950] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.265 [2024-07-25 04:10:18.408967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.265 [2024-07-25 04:10:18.408981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.265 [2024-07-25 04:10:18.409111] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:03.265 [2024-07-25 04:10:18.409280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.265 [2024-07-25 04:10:18.409303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.265 [2024-07-25 04:10:18.409395] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:03.265 [2024-07-25 04:10:18.409652] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:03.265 [2024-07-25 04:10:18.413077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.265 [2024-07-25 04:10:18.413102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.413120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.265 [2024-07-25 04:10:18.413135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.413150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.265 [2024-07-25 04:10:18.413164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.413179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.265 [2024-07-25 04:10:18.413193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.413207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527fb0 is same with the state(5) to be set 00:27:03.265 [2024-07-25 04:10:18.413279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.265 [2024-07-25 
04:10:18.413301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.413326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.265 [2024-07-25 04:10:18.413350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.413376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.265 [2024-07-25 04:10:18.413394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.413409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.265 [2024-07-25 04:10:18.413424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.413437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ff7c0 is same with the state(5) to be set 00:27:03.265 [2024-07-25 04:10:18.413470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1594b00 (9): Bad file descriptor 00:27:03.265 [2024-07-25 04:10:18.413504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec4610 (9): Bad file descriptor 00:27:03.265 [2024-07-25 04:10:18.413535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f3380 (9): Bad file descriptor 00:27:03.265 [2024-07-25 04:10:18.413574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9890 (9): Bad file descriptor 
00:27:03.265 [2024-07-25 04:10:18.413611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158d1e0 (9): Bad file descriptor 00:27:03.265 [2024-07-25 04:10:18.413795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:03.265 [2024-07-25 04:10:18.414084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.265 [2024-07-25 04:10:18.414113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1571ba0 with addr=10.0.0.2, port=4420 00:27:03.265 [2024-07-25 04:10:18.414129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1571ba0 is same with the state(5) to be set 00:27:03.265 [2024-07-25 04:10:18.414187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1571ba0 (9): Bad file descriptor 00:27:03.265 [2024-07-25 04:10:18.414253] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:03.265 [2024-07-25 04:10:18.414272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:03.265 [2024-07-25 04:10:18.414287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:03.265 [2024-07-25 04:10:18.414344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.265 [2024-07-25 04:10:18.416194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.265 [2024-07-25 04:10:18.416409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.265 [2024-07-25 04:10:18.416437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cef10 with addr=10.0.0.2, port=4420 00:27:03.265 [2024-07-25 04:10:18.416454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cef10 is same with the state(5) to be set 00:27:03.265 [2024-07-25 04:10:18.416511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cef10 (9): Bad file descriptor 00:27:03.265 [2024-07-25 04:10:18.416594] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.265 [2024-07-25 04:10:18.416614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.265 [2024-07-25 04:10:18.416639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.265 [2024-07-25 04:10:18.416696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:03.265 [2024-07-25 04:10:18.416718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.265 [2024-07-25 04:10:18.416886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.265 [2024-07-25 04:10:18.416912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159aad0 with addr=10.0.0.2, port=4420 00:27:03.265 [2024-07-25 04:10:18.416929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159aad0 is same with the state(5) to be set 00:27:03.265 [2024-07-25 04:10:18.416986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159aad0 (9): Bad file descriptor 00:27:03.265 [2024-07-25 04:10:18.417043] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:03.265 [2024-07-25 04:10:18.417061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:03.265 [2024-07-25 04:10:18.417075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:03.265 [2024-07-25 04:10:18.417131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.265 [2024-07-25 04:10:18.423107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1527fb0 (9): Bad file descriptor 00:27:03.265 [2024-07-25 04:10:18.423190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ff7c0 (9): Bad file descriptor 00:27:03.265 [2024-07-25 04:10:18.423420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.265 [2024-07-25 04:10:18.423448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.423481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.265 [2024-07-25 04:10:18.423499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.423517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.265 [2024-07-25 04:10:18.423531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.423558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.265 [2024-07-25 04:10:18.423573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.423590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.265 [2024-07-25 04:10:18.423605] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.423632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.265 [2024-07-25 04:10:18.423647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.423663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.265 [2024-07-25 04:10:18.423693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.423711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.265 [2024-07-25 04:10:18.423726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.423742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.265 [2024-07-25 04:10:18.423756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.265 [2024-07-25 04:10:18.423773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.265 [2024-07-25 04:10:18.423788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.423805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.423820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.423836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.423851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.423869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.423884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.423900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.423915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.423932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.423948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.423965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.423979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 
04:10:18.423996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 
nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:03.266 [2024-07-25 04:10:18.424562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424744] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.266 [2024-07-25 04:10:18.424883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.266 [2024-07-25 04:10:18.424897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.424917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.424933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.424950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.424965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.424982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.424996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 
04:10:18.425314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425492] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.425577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.425591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147e500 is same with the state(5) to be set 00:27:03.267 [2024-07-25 04:10:18.426929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.426954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.426976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.426992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427202] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427386] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.267 [2024-07-25 04:10:18.427533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.267 [2024-07-25 04:10:18.427556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.427573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.427587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.427604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.427619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.427646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.427660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.427677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.427702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.427719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.427734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.427751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.427767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 
04:10:18.427784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.427798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.427815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.427829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.427846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.427860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.427876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.427891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.427907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.427926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.427943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.427958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.427974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.427989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:03.268 [2024-07-25 04:10:18.428337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428510] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.268 [2024-07-25 04:10:18.428849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.268 [2024-07-25 04:10:18.428866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.269 [2024-07-25 04:10:18.428881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:27:03.269 [2024-07-25 04:10:18.428898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.269 [2024-07-25 04:10:18.428913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:60-63 (lba 32256-32640, step 128) ...]
00:27:03.269 [2024-07-25 04:10:18.429053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147fa40 is same with the state(5) to be set
00:27:03.269 [2024-07-25 04:10:18.430341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.269 [2024-07-25 04:10:18.430365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-63 (lba 24704-32640, step 128) ...]
00:27:03.270 [2024-07-25 04:10:18.432430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c9490 is same with the state(5) to be set
00:27:03.270 [2024-07-25 04:10:18.433705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.270 [2024-07-25 04:10:18.433730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-42 (lba 16512-21760, step 128) ...]
00:27:03.272 [2024-07-25 04:10:18.435132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435317] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.435808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.435823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ca960 is same with the state(5) to be set 00:27:03.272 [2024-07-25 04:10:18.437091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437136] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.272 [2024-07-25 04:10:18.437658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.272 [2024-07-25 04:10:18.437675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.437690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.437707] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.437722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.437740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.437754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.437771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.437785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.437802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.437817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.437833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.437848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.437864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.437879] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.437900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.437916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.437945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.437959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.437976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.437999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 
04:10:18.438290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:03.273 [2024-07-25 04:10:18.438867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.438976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.273 [2024-07-25 04:10:18.438993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.273 [2024-07-25 04:10:18.439007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.439024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.439039] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.439055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.439070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.439087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.439102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.439118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.439134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.439151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.439166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.439187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.439214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.439230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.439252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.439281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfc200 is same with the state(5) to be set 00:27:03.274 [2024-07-25 04:10:18.440528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:03.274 [2024-07-25 04:10:18.440567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:03.274 [2024-07-25 04:10:18.440586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:03.274 [2024-07-25 04:10:18.440604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:03.274 [2024-07-25 04:10:18.440728] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:03.274 [2024-07-25 04:10:18.440837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:03.274 [2024-07-25 04:10:18.441149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.274 [2024-07-25 04:10:18.441179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9890 with addr=10.0.0.2, port=4420 00:27:03.274 [2024-07-25 04:10:18.441197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9890 is same with the state(5) to be set 00:27:03.274 [2024-07-25 04:10:18.441333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.274 [2024-07-25 04:10:18.441359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f3380 with addr=10.0.0.2, port=4420 00:27:03.274 [2024-07-25 04:10:18.441376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f3380 is same with the state(5) to be set 00:27:03.274 [2024-07-25 04:10:18.441505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.274 [2024-07-25 04:10:18.441530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x158d1e0 with addr=10.0.0.2, port=4420 00:27:03.274 [2024-07-25 04:10:18.441554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d1e0 is same with the state(5) to be set 00:27:03.274 [2024-07-25 04:10:18.441687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.274 [2024-07-25 04:10:18.441721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec4610 with addr=10.0.0.2, port=4420 00:27:03.274 [2024-07-25 04:10:18.441737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec4610 is same with the state(5) to be set 00:27:03.274 [2024-07-25 04:10:18.443096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:03.274 [2024-07-25 04:10:18.443526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.274 [2024-07-25 04:10:18.443910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.274 [2024-07-25 04:10:18.443926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.443942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.443960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.443974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.443991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444290] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444460] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 
04:10:18.444839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.444973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.444987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.445003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.445018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.275 [2024-07-25 04:10:18.445035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.275 [2024-07-25 04:10:18.445049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.445065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.445079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.445096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.445111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.445127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.445142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.445158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.445173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.445189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 
nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.445205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.445221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.445251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.445268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea3cd0 is same with the state(5) to be set 00:27:03.276 [2024-07-25 04:10:18.446561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.446589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.446611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.446628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.446645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.446659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.446676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:03.276 [2024-07-25 04:10:18.446691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.446712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.446727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.446744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.446758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.446777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.446791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.446807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.446821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.446838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.446852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.446869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.446883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.446900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.446915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.446931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.446946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.446963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.446978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.446999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.447015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.447031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.447046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.276 [2024-07-25 04:10:18.447064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.276 [2024-07-25 04:10:18.447078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical READ / ABORTED - SQ DELETION (00/08) log pairs repeated for cid:16 through cid:58 (nsid:1, lba 18432 through 23808 in steps of 128, len:128) elided ...] 
00:27:03.277 [2024-07-25 04:10:18.448533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.277 [2024-07-25 04:10:18.448548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:03.277 [2024-07-25 04:10:18.448574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.277 [2024-07-25 04:10:18.448588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.277 [2024-07-25 04:10:18.448605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.277 [2024-07-25 04:10:18.448620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.277 [2024-07-25 04:10:18.448637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.277 [2024-07-25 04:10:18.448651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.277 [2024-07-25 04:10:18.448667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.277 [2024-07-25 04:10:18.448695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.277 [2024-07-25 04:10:18.448710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14effa0 is same with the state(5) to be set 00:27:03.277 [2024-07-25 04:10:18.451010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:03.277 [2024-07-25 04:10:18.451045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.277 [2024-07-25 04:10:18.451065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting 
controller 00:27:03.277 [2024-07-25 04:10:18.451085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:03.277 task offset: 17664 on job bdev=Nvme10n1 fails 00:27:03.277 00:27:03.277 Latency(us) 00:27:03.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.277 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.277 Job: Nvme1n1 ended in about 0.87 seconds with error 00:27:03.277 Verification LBA range: start 0x0 length 0x400 00:27:03.277 Nvme1n1 : 0.87 152.31 9.52 73.29 0.00 280479.05 16699.54 268746.15 00:27:03.277 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.277 Job: Nvme2n1 ended in about 0.87 seconds with error 00:27:03.277 Verification LBA range: start 0x0 length 0x400 00:27:03.277 Nvme2n1 : 0.87 147.04 9.19 73.52 0.00 280670.06 10000.31 298261.62 00:27:03.277 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.277 Job: Nvme3n1 ended in about 0.89 seconds with error 00:27:03.277 Verification LBA range: start 0x0 length 0x400 00:27:03.277 Nvme3n1 : 0.89 143.07 8.94 71.53 0.00 282546.63 25631.86 259425.47 00:27:03.277 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.277 Job: Nvme4n1 ended in about 0.90 seconds with error 00:27:03.277 Verification LBA range: start 0x0 length 0x400 00:27:03.277 Nvme4n1 : 0.90 213.78 13.36 71.26 0.00 208127.43 17864.63 225249.66 00:27:03.277 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.277 Job: Nvme5n1 ended in about 0.90 seconds with error 00:27:03.277 Verification LBA range: start 0x0 length 0x400 00:27:03.278 Nvme5n1 : 0.90 212.98 13.31 70.99 0.00 204366.13 17864.63 254765.13 00:27:03.278 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.278 Job: Nvme6n1 ended in about 0.90 seconds with error 00:27:03.278 Verification LBA range: start 0x0 
length 0x400 00:27:03.278 Nvme6n1 : 0.90 141.46 8.84 70.73 0.00 267608.62 22524.97 251658.24 00:27:03.278 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.278 Job: Nvme7n1 ended in about 0.91 seconds with error 00:27:03.278 Verification LBA range: start 0x0 length 0x400 00:27:03.278 Nvme7n1 : 0.91 140.92 8.81 70.46 0.00 262659.79 34564.17 229910.00 00:27:03.278 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.278 Job: Nvme8n1 ended in about 0.91 seconds with error 00:27:03.278 Verification LBA range: start 0x0 length 0x400 00:27:03.278 Nvme8n1 : 0.91 140.00 8.75 70.00 0.00 258743.81 16602.45 257872.02 00:27:03.278 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.278 Job: Nvme9n1 ended in about 0.92 seconds with error 00:27:03.278 Verification LBA range: start 0x0 length 0x400 00:27:03.278 Nvme9n1 : 0.92 139.48 8.72 69.74 0.00 254046.81 22039.51 250104.79 00:27:03.278 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.278 Job: Nvme10n1 ended in about 0.87 seconds with error 00:27:03.278 Verification LBA range: start 0x0 length 0x400 00:27:03.278 Nvme10n1 : 0.87 147.67 9.23 73.83 0.00 231286.83 6553.60 318456.41 00:27:03.278 =================================================================================================================== 00:27:03.278 Total : 1578.71 98.67 715.36 0.00 250202.01 6553.60 318456.41 00:27:03.278 [2024-07-25 04:10:18.478178] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:03.278 [2024-07-25 04:10:18.478277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:03.278 [2024-07-25 04:10:18.478618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.278 [2024-07-25 04:10:18.478657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1594b00 with addr=10.0.0.2, port=4420 
00:27:03.278 [2024-07-25 04:10:18.478679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1594b00 is same with the state(5) to be set 00:27:03.278 [2024-07-25 04:10:18.478707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9890 (9): Bad file descriptor 00:27:03.278 [2024-07-25 04:10:18.478731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f3380 (9): Bad file descriptor 00:27:03.278 [2024-07-25 04:10:18.478751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158d1e0 (9): Bad file descriptor 00:27:03.278 [2024-07-25 04:10:18.478770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec4610 (9): Bad file descriptor 00:27:03.278 [2024-07-25 04:10:18.478858] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:03.278 [2024-07-25 04:10:18.478887] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:03.278 [2024-07-25 04:10:18.478917] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:03.278 [2024-07-25 04:10:18.478937] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:03.278 [2024-07-25 04:10:18.478958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1594b00 (9): Bad file descriptor 00:27:03.278 [2024-07-25 04:10:18.479259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.278 [2024-07-25 04:10:18.479291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1571ba0 with addr=10.0.0.2, port=4420 00:27:03.278 [2024-07-25 04:10:18.479309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1571ba0 is same with the state(5) to be set 00:27:03.278 [2024-07-25 04:10:18.479430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.278 [2024-07-25 04:10:18.479458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cef10 with addr=10.0.0.2, port=4420 00:27:03.278 [2024-07-25 04:10:18.479475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cef10 is same with the state(5) to be set 00:27:03.278 [2024-07-25 04:10:18.479613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.278 [2024-07-25 04:10:18.479639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159aad0 with addr=10.0.0.2, port=4420 00:27:03.278 [2024-07-25 04:10:18.479656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159aad0 is same with the state(5) to be set 00:27:03.278 [2024-07-25 04:10:18.479772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.278 [2024-07-25 04:10:18.479798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1527fb0 with addr=10.0.0.2, port=4420 00:27:03.278 [2024-07-25 04:10:18.479816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527fb0 is same with the state(5) to be set 00:27:03.278 [2024-07-25 
04:10:18.479960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.278 [2024-07-25 04:10:18.479986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ff7c0 with addr=10.0.0.2, port=4420 00:27:03.278 [2024-07-25 04:10:18.480014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ff7c0 is same with the state(5) to be set 00:27:03.278 [2024-07-25 04:10:18.480034] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:03.278 [2024-07-25 04:10:18.480048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:03.278 [2024-07-25 04:10:18.480064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:03.278 [2024-07-25 04:10:18.480087] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:03.278 [2024-07-25 04:10:18.480102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:03.278 [2024-07-25 04:10:18.480116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:03.278 [2024-07-25 04:10:18.480134] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:03.278 [2024-07-25 04:10:18.480148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:03.278 [2024-07-25 04:10:18.480163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:27:03.278 [2024-07-25 04:10:18.480180] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:03.278 [2024-07-25 04:10:18.480194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:03.278 [2024-07-25 04:10:18.480207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:03.278 [2024-07-25 04:10:18.480256] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:03.278 [2024-07-25 04:10:18.480282] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:03.278 [2024-07-25 04:10:18.480302] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:03.278 [2024-07-25 04:10:18.480324] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:03.278 [2024-07-25 04:10:18.480345] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:03.278 [2024-07-25 04:10:18.480971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.278 [2024-07-25 04:10:18.480997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.278 [2024-07-25 04:10:18.481016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.278 [2024-07-25 04:10:18.481028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.278 [2024-07-25 04:10:18.481045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1571ba0 (9): Bad file descriptor 00:27:03.278 [2024-07-25 04:10:18.481065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cef10 (9): Bad file descriptor 00:27:03.278 [2024-07-25 04:10:18.481083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159aad0 (9): Bad file descriptor 00:27:03.278 [2024-07-25 04:10:18.481101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1527fb0 (9): Bad file descriptor 00:27:03.278 [2024-07-25 04:10:18.481119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ff7c0 (9): Bad file descriptor 00:27:03.278 [2024-07-25 04:10:18.481135] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:03.278 [2024-07-25 04:10:18.481150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:03.278 [2024-07-25 04:10:18.481163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:03.278 [2024-07-25 04:10:18.481513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.278 [2024-07-25 04:10:18.481540] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:03.278 [2024-07-25 04:10:18.481556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:03.278 [2024-07-25 04:10:18.481574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:27:03.278 [2024-07-25 04:10:18.481591] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.278 [2024-07-25 04:10:18.481606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.278 [2024-07-25 04:10:18.481620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.278 [2024-07-25 04:10:18.481636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:03.278 [2024-07-25 04:10:18.481651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:03.278 [2024-07-25 04:10:18.481664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:03.278 [2024-07-25 04:10:18.481681] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:03.278 [2024-07-25 04:10:18.481695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:03.278 [2024-07-25 04:10:18.481709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:03.278 [2024-07-25 04:10:18.481725] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:03.278 [2024-07-25 04:10:18.481740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:03.278 [2024-07-25 04:10:18.481753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:03.278 [2024-07-25 04:10:18.481807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.278 [2024-07-25 04:10:18.481827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.279 [2024-07-25 04:10:18.481840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.279 [2024-07-25 04:10:18.481852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.279 [2024-07-25 04:10:18.481864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.845 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:03.845 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:04.780 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 912173 00:27:04.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (912173) - No such process 00:27:04.780 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:04.780 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:04.780 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:04.780 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:04.780 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:04.780 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:04.780 04:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:04.780 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:04.780 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:04.780 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:04.780 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:04.780 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:04.780 rmmod nvme_tcp 00:27:04.780 rmmod nvme_fabrics 00:27:04.780 rmmod nvme_keyring 00:27:04.781 04:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:04.781 04:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:04.781 04:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:04.781 04:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:04.781 04:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:04.781 04:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:04.781 04:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:04.781 04:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:04.781 04:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:04.781 04:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.781 04:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.781 04:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.332 04:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:07.332 00:27:07.332 real 0m7.599s 00:27:07.332 user 0m18.595s 00:27:07.332 sys 0m1.548s 00:27:07.332 04:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:07.332 04:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:07.332 ************************************ 00:27:07.332 END TEST nvmf_shutdown_tc3 00:27:07.332 ************************************ 00:27:07.332 04:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:07.332 00:27:07.332 real 0m27.333s 00:27:07.332 user 1m16.774s 00:27:07.332 sys 0m6.318s 00:27:07.332 04:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:07.332 04:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:07.332 ************************************ 00:27:07.332 END TEST nvmf_shutdown 00:27:07.332 ************************************ 00:27:07.332 04:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:27:07.332 00:27:07.332 real 16m46.669s 00:27:07.332 user 47m14.033s 00:27:07.332 sys 3m52.657s 00:27:07.332 04:10:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:07.332 04:10:22 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:27:07.332 ************************************ 00:27:07.332 END TEST nvmf_target_extra 00:27:07.332 ************************************ 00:27:07.332 04:10:22 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:07.332 04:10:22 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:07.332 04:10:22 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:07.332 04:10:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.332 ************************************ 00:27:07.332 START TEST nvmf_host 00:27:07.332 ************************************ 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:07.332 * Looking for test storage... 00:27:07.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.332 04:10:22 
nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:07.332 04:10:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.333 ************************************ 00:27:07.333 START TEST nvmf_multicontroller 00:27:07.333 ************************************ 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:07.333 * Looking for test storage... 
00:27:07.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:07.333 04:10:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@291 -- # pci_devs=() 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.252 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:09.253 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:09.253 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:09.253 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:09.253 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@414 -- # is_hw=yes 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:09.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:27:09.253 00:27:09.253 --- 10.0.0.2 ping statistics --- 00:27:09.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.253 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:09.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:27:09.253 00:27:09.253 --- 10.0.0.1 ping statistics --- 00:27:09.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.253 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=914722 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 914722 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 914722 ']' 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:09.253 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.253 [2024-07-25 04:10:24.424542] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:27:09.253 [2024-07-25 04:10:24.424648] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.253 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.253 [2024-07-25 04:10:24.462377] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:09.253 [2024-07-25 04:10:24.494292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:09.511 [2024-07-25 04:10:24.584786] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:09.511 [2024-07-25 04:10:24.584846] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.511 [2024-07-25 04:10:24.584871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.511 [2024-07-25 04:10:24.584884] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.511 [2024-07-25 04:10:24.584895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.511 [2024-07-25 04:10:24.585002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.511 [2024-07-25 04:10:24.585099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:09.511 [2024-07-25 04:10:24.585102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.511 [2024-07-25 
04:10:24.723461] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.511 Malloc0 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.511 [2024-07-25 04:10:24.793994] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.511 [2024-07-25 04:10:24.801862] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.511 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.769 Malloc1 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=914754 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@47 -- # waitforlisten 914754 /var/tmp/bdevperf.sock 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 914754 ']' 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:09.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:09.769 04:10:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.026 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:10.026 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:27:10.026 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:10.026 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.026 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.283 NVMe0n1 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.283 1 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.283 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.283 request: 00:27:10.283 { 00:27:10.283 "name": "NVMe0", 00:27:10.283 "trtype": "tcp", 00:27:10.283 "traddr": "10.0.0.2", 00:27:10.283 "adrfam": "ipv4", 00:27:10.283 "trsvcid": "4420", 00:27:10.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:10.283 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:10.283 "hostaddr": "10.0.0.2", 00:27:10.284 "hostsvcid": "60000", 00:27:10.284 "prchk_reftag": false, 00:27:10.284 "prchk_guard": false, 00:27:10.284 "hdgst": false, 00:27:10.284 "ddgst": false, 00:27:10.284 "method": "bdev_nvme_attach_controller", 00:27:10.284 "req_id": 1 00:27:10.284 } 00:27:10.284 Got JSON-RPC error response 00:27:10.284 response: 00:27:10.284 { 00:27:10.284 "code": -114, 00:27:10.284 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:10.284 } 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 
10.0.0.2 -c 60000 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.284 request: 00:27:10.284 { 00:27:10.284 "name": "NVMe0", 00:27:10.284 "trtype": "tcp", 00:27:10.284 "traddr": "10.0.0.2", 00:27:10.284 "adrfam": "ipv4", 00:27:10.284 "trsvcid": "4420", 00:27:10.284 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:10.284 "hostaddr": "10.0.0.2", 00:27:10.284 "hostsvcid": "60000", 00:27:10.284 "prchk_reftag": false, 00:27:10.284 "prchk_guard": false, 00:27:10.284 "hdgst": false, 00:27:10.284 "ddgst": false, 00:27:10.284 "method": "bdev_nvme_attach_controller", 00:27:10.284 "req_id": 1 00:27:10.284 } 00:27:10.284 Got JSON-RPC error response 00:27:10.284 response: 00:27:10.284 { 
00:27:10.284 "code": -114, 00:27:10.284 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:10.284 } 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.284 request: 00:27:10.284 { 00:27:10.284 "name": "NVMe0", 00:27:10.284 "trtype": "tcp", 00:27:10.284 "traddr": "10.0.0.2", 00:27:10.284 "adrfam": "ipv4", 00:27:10.284 "trsvcid": "4420", 00:27:10.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:10.284 "hostaddr": "10.0.0.2", 00:27:10.284 "hostsvcid": "60000", 00:27:10.284 "prchk_reftag": false, 00:27:10.284 "prchk_guard": false, 00:27:10.284 "hdgst": false, 00:27:10.284 "ddgst": false, 00:27:10.284 "multipath": "disable", 00:27:10.284 "method": "bdev_nvme_attach_controller", 00:27:10.284 "req_id": 1 00:27:10.284 } 00:27:10.284 Got JSON-RPC error response 00:27:10.284 response: 00:27:10.284 { 00:27:10.284 "code": -114, 00:27:10.284 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:10.284 } 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:10.284 
04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.284 request: 00:27:10.284 { 00:27:10.284 "name": "NVMe0", 00:27:10.284 "trtype": "tcp", 00:27:10.284 "traddr": "10.0.0.2", 00:27:10.284 "adrfam": "ipv4", 00:27:10.284 "trsvcid": "4420", 00:27:10.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:10.284 "hostaddr": "10.0.0.2", 00:27:10.284 "hostsvcid": "60000", 00:27:10.284 "prchk_reftag": false, 00:27:10.284 "prchk_guard": false, 00:27:10.284 "hdgst": false, 00:27:10.284 "ddgst": false, 00:27:10.284 "multipath": "failover", 00:27:10.284 "method": "bdev_nvme_attach_controller", 00:27:10.284 "req_id": 1 00:27:10.284 } 00:27:10.284 Got JSON-RPC error response 00:27:10.284 
response: 00:27:10.284 { 00:27:10.284 "code": -114, 00:27:10.284 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:10.284 } 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.284 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.541 00:27:10.541 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.541 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:10.541 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.541 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.541 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.541 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:10.541 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.541 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.798 00:27:10.798 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.798 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:10.798 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:10.798 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.798 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.798 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.798 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:10.798 04:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:12.168 0 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.168 04:10:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 914754 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 914754 ']' 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 914754 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 914754 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 914754' 00:27:12.168 killing process with pid 914754 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 914754 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 914754 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 
00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:27:12.168 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:12.168 [2024-07-25 04:10:24.905237] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:27:12.168 [2024-07-25 04:10:24.905356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914754 ] 00:27:12.168 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.168 [2024-07-25 04:10:24.937702] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:12.168 [2024-07-25 04:10:24.966180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.168 [2024-07-25 04:10:25.050992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.168 [2024-07-25 04:10:25.915927] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 1af00637-79b4-4b56-af94-4466d972babb already exists 00:27:12.168 [2024-07-25 04:10:25.915965] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:1af00637-79b4-4b56-af94-4466d972babb alias for bdev NVMe1n1 00:27:12.168 [2024-07-25 04:10:25.915990] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:12.168 Running I/O for 1 seconds... 00:27:12.168 00:27:12.168 Latency(us) 00:27:12.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.168 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:12.168 NVMe0n1 : 1.01 19355.25 75.61 0.00 0.00 6602.42 4441.88 12524.66 00:27:12.168 =================================================================================================================== 00:27:12.168 Total : 19355.25 75.61 0.00 0.00 6602.42 4441.88 12524.66 00:27:12.168 Received shutdown signal, test time was about 1.000000 seconds 00:27:12.168 00:27:12.168 Latency(us) 00:27:12.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.168 =================================================================================================================== 00:27:12.168 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:12.168 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@108 -- # nvmftestfini 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:12.168 rmmod nvme_tcp 00:27:12.168 rmmod nvme_fabrics 00:27:12.168 rmmod nvme_keyring 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 914722 ']' 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 914722 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 914722 ']' 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 914722 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 914722 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 
00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 914722' 00:27:12.168 killing process with pid 914722 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 914722 00:27:12.168 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 914722 00:27:12.425 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:12.425 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:12.425 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:12.425 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:12.425 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:12.425 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.425 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.425 04:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.951 04:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:14.951 00:27:14.951 real 0m7.489s 00:27:14.951 user 0m12.349s 00:27:14.951 sys 0m2.223s 00:27:14.951 04:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:14.951 04:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:14.951 ************************************ 00:27:14.951 END TEST nvmf_multicontroller 
00:27:14.951 ************************************ 00:27:14.951 04:10:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:14.951 04:10:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.952 ************************************ 00:27:14.952 START TEST nvmf_aer 00:27:14.952 ************************************ 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:14.952 * Looking for test storage... 00:27:14.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.952 04:10:29 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:14.952 04:10:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # 
x722=() 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:16.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:16.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:16.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:16.848 Found net devices under 0000:0a:00.1: 
cvl_0_1 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.848 
04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:16.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:27:16.848 00:27:16.848 --- 10.0.0.2 ping statistics --- 00:27:16.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.848 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:16.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:27:16.848 00:27:16.848 --- 10.0.0.1 ping statistics --- 00:27:16.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.848 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:16.848 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:16.849 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.849 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:16.849 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:16.849 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.849 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:16.849 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:16.849 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:16.849 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:16.849 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:16.849 04:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:16.849 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=916961 00:27:16.849 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:16.849 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 916961 00:27:16.849 04:10:32 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 916961 ']' 00:27:16.849 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.849 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:16.849 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.849 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:16.849 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:16.849 [2024-07-25 04:10:32.049307] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:27:16.849 [2024-07-25 04:10:32.049386] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.849 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.849 [2024-07-25 04:10:32.088040] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:16.849 [2024-07-25 04:10:32.115609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.106 [2024-07-25 04:10:32.204926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.106 [2024-07-25 04:10:32.204986] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:17.106 [2024-07-25 04:10:32.205013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.106 [2024-07-25 04:10:32.205024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.106 [2024-07-25 04:10:32.205033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:17.106 [2024-07-25 04:10:32.205135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.106 [2024-07-25 04:10:32.205195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.106 [2024-07-25 04:10:32.205268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.106 [2024-07-25 04:10:32.205271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:17.106 [2024-07-25 04:10:32.357729] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:17.106 Malloc0 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.106 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:17.363 [2024-07-25 04:10:32.411350] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- 
host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:17.363 [ 00:27:17.363 { 00:27:17.363 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:17.363 "subtype": "Discovery", 00:27:17.363 "listen_addresses": [], 00:27:17.363 "allow_any_host": true, 00:27:17.363 "hosts": [] 00:27:17.363 }, 00:27:17.363 { 00:27:17.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:17.363 "subtype": "NVMe", 00:27:17.363 "listen_addresses": [ 00:27:17.363 { 00:27:17.363 "trtype": "TCP", 00:27:17.363 "adrfam": "IPv4", 00:27:17.363 "traddr": "10.0.0.2", 00:27:17.363 "trsvcid": "4420" 00:27:17.363 } 00:27:17.363 ], 00:27:17.363 "allow_any_host": true, 00:27:17.363 "hosts": [], 00:27:17.363 "serial_number": "SPDK00000000000001", 00:27:17.363 "model_number": "SPDK bdev Controller", 00:27:17.363 "max_namespaces": 2, 00:27:17.363 "min_cntlid": 1, 00:27:17.363 "max_cntlid": 65519, 00:27:17.363 "namespaces": [ 00:27:17.363 { 00:27:17.363 "nsid": 1, 00:27:17.363 "bdev_name": "Malloc0", 00:27:17.363 "name": "Malloc0", 00:27:17.363 "nguid": "E338FC49CCFE4358BEC0EB14C7D57E51", 00:27:17.363 "uuid": "e338fc49-ccfe-4358-bec0-eb14c7d57e51" 00:27:17.363 } 00:27:17.363 ] 00:27:17.363 } 00:27:17.363 ] 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=917100 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:17.363 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:27:17.363 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:17.621 Malloc1 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:17.621 [ 00:27:17.621 { 00:27:17.621 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:17.621 "subtype": "Discovery", 00:27:17.621 "listen_addresses": [], 00:27:17.621 "allow_any_host": true, 00:27:17.621 "hosts": [] 00:27:17.621 }, 00:27:17.621 { 00:27:17.621 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:17.621 "subtype": "NVMe", 00:27:17.621 "listen_addresses": [ 00:27:17.621 { 00:27:17.621 "trtype": "TCP", 00:27:17.621 "adrfam": "IPv4", 00:27:17.621 "traddr": "10.0.0.2", 00:27:17.621 "trsvcid": "4420" 00:27:17.621 } 00:27:17.621 ], 00:27:17.621 "allow_any_host": true, 00:27:17.621 "hosts": [], 00:27:17.621 "serial_number": "SPDK00000000000001", 00:27:17.621 "model_number": 
"SPDK bdev Controller", 00:27:17.621 "max_namespaces": 2, 00:27:17.621 "min_cntlid": 1, 00:27:17.621 "max_cntlid": 65519, 00:27:17.621 "namespaces": [ 00:27:17.621 { 00:27:17.621 "nsid": 1, 00:27:17.621 "bdev_name": "Malloc0", 00:27:17.621 "name": "Malloc0", 00:27:17.621 "nguid": "E338FC49CCFE4358BEC0EB14C7D57E51", 00:27:17.621 "uuid": "e338fc49-ccfe-4358-bec0-eb14c7d57e51" 00:27:17.621 }, 00:27:17.621 { 00:27:17.621 "nsid": 2, 00:27:17.621 "bdev_name": "Malloc1", 00:27:17.621 "name": "Malloc1", 00:27:17.621 "nguid": "B5D6AF18FAD94AE8A7EB260A67D4393A", 00:27:17.621 "uuid": "b5d6af18-fad9-4ae8-a7eb-260a67d4393a" 00:27:17.621 } 00:27:17.621 ] 00:27:17.621 } 00:27:17.621 ] 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 917100 00:27:17.621 Asynchronous Event Request test 00:27:17.621 Attaching to 10.0.0.2 00:27:17.621 Attached to 10.0.0.2 00:27:17.621 Registering asynchronous event callbacks... 00:27:17.621 Starting namespace attribute notice tests for all controllers... 00:27:17.621 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:17.621 aer_cb - Changed Namespace 00:27:17.621 Cleaning up... 
00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:17.621 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:17.621 rmmod nvme_tcp 
00:27:17.621 rmmod nvme_fabrics 00:27:17.621 rmmod nvme_keyring 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 916961 ']' 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 916961 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 916961 ']' 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 916961 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 916961 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 916961' 00:27:17.879 killing process with pid 916961 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 916961 00:27:17.879 04:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 916961 00:27:18.137 04:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:18.137 04:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:18.137 04:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:18.137 04:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:18.137 04:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:18.137 04:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.137 04:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.137 04:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.033 04:10:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:20.033 00:27:20.033 real 0m5.457s 00:27:20.033 user 0m4.590s 00:27:20.033 sys 0m1.925s 00:27:20.033 04:10:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.033 04:10:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:20.033 ************************************ 00:27:20.033 END TEST nvmf_aer 00:27:20.033 ************************************ 00:27:20.033 04:10:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:20.033 04:10:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:20.033 04:10:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.033 04:10:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.033 ************************************ 00:27:20.033 START TEST nvmf_async_init 00:27:20.033 ************************************ 00:27:20.033 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:20.291 * Looking for test storage... 
00:27:20.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.291 04:10:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:20.291 04:10:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=10425180b07e4da8bfcac430268e146c 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:20.291 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.292 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.292 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.292 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.292 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.292 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.292 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.292 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:20.292 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:20.292 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:20.292 04:10:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.190 
04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:22.190 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.190 04:10:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:22.190 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:22.190 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:22.190 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.190 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:22.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:27:22.190 00:27:22.191 --- 10.0.0.2 ping statistics --- 00:27:22.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.191 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:22.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:27:22.191 00:27:22.191 --- 10.0.0.1 ping statistics --- 00:27:22.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.191 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:22.191 04:10:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=919038 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 919038 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 919038 ']' 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:22.191 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.191 [2024-07-25 04:10:37.402972] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:27:22.191 [2024-07-25 04:10:37.403050] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.191 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.191 [2024-07-25 04:10:37.440322] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:22.191 [2024-07-25 04:10:37.468001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.448 [2024-07-25 04:10:37.555269] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.448 [2024-07-25 04:10:37.555337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.448 [2024-07-25 04:10:37.555359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.448 [2024-07-25 04:10:37.555370] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.448 [2024-07-25 04:10:37.555381] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:22.448 [2024-07-25 04:10:37.555407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.448 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.449 [2024-07-25 04:10:37.699663] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.449 null0 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 10425180b07e4da8bfcac430268e146c 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.449 [2024-07-25 04:10:37.739943] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.449 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.706 nvme0n1 00:27:22.706 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.706 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:22.706 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.706 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.706 [ 00:27:22.706 { 00:27:22.706 "name": "nvme0n1", 00:27:22.706 "aliases": [ 00:27:22.706 "10425180-b07e-4da8-bfca-c430268e146c" 00:27:22.707 ], 00:27:22.707 "product_name": "NVMe disk", 00:27:22.707 "block_size": 512, 00:27:22.707 "num_blocks": 2097152, 00:27:22.707 "uuid": "10425180-b07e-4da8-bfca-c430268e146c", 00:27:22.707 "assigned_rate_limits": { 00:27:22.707 "rw_ios_per_sec": 0, 00:27:22.707 "rw_mbytes_per_sec": 0, 00:27:22.707 "r_mbytes_per_sec": 0, 00:27:22.707 "w_mbytes_per_sec": 0 00:27:22.707 }, 00:27:22.707 "claimed": false, 00:27:22.707 "zoned": false, 00:27:22.707 "supported_io_types": { 00:27:22.707 "read": true, 00:27:22.707 "write": true, 00:27:22.707 "unmap": false, 00:27:22.707 "flush": true, 00:27:22.707 "reset": true, 00:27:22.707 "nvme_admin": true, 00:27:22.707 "nvme_io": true, 00:27:22.707 "nvme_io_md": false, 00:27:22.707 "write_zeroes": true, 00:27:22.707 "zcopy": false, 00:27:22.707 "get_zone_info": false, 00:27:22.707 "zone_management": false, 00:27:22.707 "zone_append": false, 00:27:22.707 "compare": true, 00:27:22.707 "compare_and_write": true, 00:27:22.707 "abort": true, 00:27:22.707 "seek_hole": false, 00:27:22.707 "seek_data": false, 00:27:22.707 "copy": true, 00:27:22.707 "nvme_iov_md": false 
00:27:22.707 }, 00:27:22.707 "memory_domains": [ 00:27:22.707 { 00:27:22.707 "dma_device_id": "system", 00:27:22.707 "dma_device_type": 1 00:27:22.707 } 00:27:22.707 ], 00:27:22.707 "driver_specific": { 00:27:22.707 "nvme": [ 00:27:22.707 { 00:27:22.707 "trid": { 00:27:22.707 "trtype": "TCP", 00:27:22.707 "adrfam": "IPv4", 00:27:22.707 "traddr": "10.0.0.2", 00:27:22.707 "trsvcid": "4420", 00:27:22.707 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:22.707 }, 00:27:22.707 "ctrlr_data": { 00:27:22.707 "cntlid": 1, 00:27:22.707 "vendor_id": "0x8086", 00:27:22.707 "model_number": "SPDK bdev Controller", 00:27:22.707 "serial_number": "00000000000000000000", 00:27:22.707 "firmware_revision": "24.09", 00:27:22.707 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:22.707 "oacs": { 00:27:22.707 "security": 0, 00:27:22.707 "format": 0, 00:27:22.707 "firmware": 0, 00:27:22.707 "ns_manage": 0 00:27:22.707 }, 00:27:22.707 "multi_ctrlr": true, 00:27:22.707 "ana_reporting": false 00:27:22.707 }, 00:27:22.707 "vs": { 00:27:22.707 "nvme_version": "1.3" 00:27:22.707 }, 00:27:22.707 "ns_data": { 00:27:22.707 "id": 1, 00:27:22.707 "can_share": true 00:27:22.707 } 00:27:22.707 } 00:27:22.707 ], 00:27:22.707 "mp_policy": "active_passive" 00:27:22.707 } 00:27:22.707 } 00:27:22.707 ] 00:27:22.707 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.707 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:22.707 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.707 04:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.707 [2024-07-25 04:10:37.996720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:22.707 [2024-07-25 04:10:37.996821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1850 
(9): Bad file descriptor 00:27:22.967 [2024-07-25 04:10:38.129419] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.967 [ 00:27:22.967 { 00:27:22.967 "name": "nvme0n1", 00:27:22.967 "aliases": [ 00:27:22.967 "10425180-b07e-4da8-bfca-c430268e146c" 00:27:22.967 ], 00:27:22.967 "product_name": "NVMe disk", 00:27:22.967 "block_size": 512, 00:27:22.967 "num_blocks": 2097152, 00:27:22.967 "uuid": "10425180-b07e-4da8-bfca-c430268e146c", 00:27:22.967 "assigned_rate_limits": { 00:27:22.967 "rw_ios_per_sec": 0, 00:27:22.967 "rw_mbytes_per_sec": 0, 00:27:22.967 "r_mbytes_per_sec": 0, 00:27:22.967 "w_mbytes_per_sec": 0 00:27:22.967 }, 00:27:22.967 "claimed": false, 00:27:22.967 "zoned": false, 00:27:22.967 "supported_io_types": { 00:27:22.967 "read": true, 00:27:22.967 "write": true, 00:27:22.967 "unmap": false, 00:27:22.967 "flush": true, 00:27:22.967 "reset": true, 00:27:22.967 "nvme_admin": true, 00:27:22.967 "nvme_io": true, 00:27:22.967 "nvme_io_md": false, 00:27:22.967 "write_zeroes": true, 00:27:22.967 "zcopy": false, 00:27:22.967 "get_zone_info": false, 00:27:22.967 "zone_management": false, 00:27:22.967 "zone_append": false, 00:27:22.967 "compare": true, 00:27:22.967 "compare_and_write": true, 00:27:22.967 "abort": true, 00:27:22.967 "seek_hole": false, 00:27:22.967 "seek_data": false, 00:27:22.967 "copy": true, 00:27:22.967 "nvme_iov_md": false 00:27:22.967 }, 00:27:22.967 "memory_domains": [ 00:27:22.967 { 00:27:22.967 "dma_device_id": "system", 00:27:22.967 "dma_device_type": 1 
00:27:22.967 } 00:27:22.967 ], 00:27:22.967 "driver_specific": { 00:27:22.967 "nvme": [ 00:27:22.967 { 00:27:22.967 "trid": { 00:27:22.967 "trtype": "TCP", 00:27:22.967 "adrfam": "IPv4", 00:27:22.967 "traddr": "10.0.0.2", 00:27:22.967 "trsvcid": "4420", 00:27:22.967 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:22.967 }, 00:27:22.967 "ctrlr_data": { 00:27:22.967 "cntlid": 2, 00:27:22.967 "vendor_id": "0x8086", 00:27:22.967 "model_number": "SPDK bdev Controller", 00:27:22.967 "serial_number": "00000000000000000000", 00:27:22.967 "firmware_revision": "24.09", 00:27:22.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:22.967 "oacs": { 00:27:22.967 "security": 0, 00:27:22.967 "format": 0, 00:27:22.967 "firmware": 0, 00:27:22.967 "ns_manage": 0 00:27:22.967 }, 00:27:22.967 "multi_ctrlr": true, 00:27:22.967 "ana_reporting": false 00:27:22.967 }, 00:27:22.967 "vs": { 00:27:22.967 "nvme_version": "1.3" 00:27:22.967 }, 00:27:22.967 "ns_data": { 00:27:22.967 "id": 1, 00:27:22.967 "can_share": true 00:27:22.967 } 00:27:22.967 } 00:27:22.967 ], 00:27:22.967 "mp_policy": "active_passive" 00:27:22.967 } 00:27:22.967 } 00:27:22.967 ] 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.d0OvHmY3eI 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.d0OvHmY3eI 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:22.967 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.968 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.968 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.968 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:22.968 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.968 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.968 [2024-07-25 04:10:38.189384] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:22.968 [2024-07-25 04:10:38.189577] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:22.968 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.968 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.d0OvHmY3eI 00:27:22.968 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.968 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.968 [2024-07-25 04:10:38.197378] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:27:22.968 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.968 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.d0OvHmY3eI 00:27:22.968 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.968 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:22.968 [2024-07-25 04:10:38.205421] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:22.968 [2024-07-25 04:10:38.205491] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:23.226 nvme0n1 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:23.226 [ 00:27:23.226 { 00:27:23.226 "name": "nvme0n1", 00:27:23.226 "aliases": [ 00:27:23.226 "10425180-b07e-4da8-bfca-c430268e146c" 00:27:23.226 ], 00:27:23.226 "product_name": "NVMe disk", 00:27:23.226 "block_size": 512, 00:27:23.226 "num_blocks": 2097152, 00:27:23.226 "uuid": "10425180-b07e-4da8-bfca-c430268e146c", 00:27:23.226 "assigned_rate_limits": { 00:27:23.226 "rw_ios_per_sec": 0, 00:27:23.226 "rw_mbytes_per_sec": 0, 00:27:23.226 "r_mbytes_per_sec": 0, 00:27:23.226 "w_mbytes_per_sec": 0 00:27:23.226 }, 00:27:23.226 "claimed": false, 00:27:23.226 "zoned": false, 00:27:23.226 "supported_io_types": { 
00:27:23.226 "read": true, 00:27:23.226 "write": true, 00:27:23.226 "unmap": false, 00:27:23.226 "flush": true, 00:27:23.226 "reset": true, 00:27:23.226 "nvme_admin": true, 00:27:23.226 "nvme_io": true, 00:27:23.226 "nvme_io_md": false, 00:27:23.226 "write_zeroes": true, 00:27:23.226 "zcopy": false, 00:27:23.226 "get_zone_info": false, 00:27:23.226 "zone_management": false, 00:27:23.226 "zone_append": false, 00:27:23.226 "compare": true, 00:27:23.226 "compare_and_write": true, 00:27:23.226 "abort": true, 00:27:23.226 "seek_hole": false, 00:27:23.226 "seek_data": false, 00:27:23.226 "copy": true, 00:27:23.226 "nvme_iov_md": false 00:27:23.226 }, 00:27:23.226 "memory_domains": [ 00:27:23.226 { 00:27:23.226 "dma_device_id": "system", 00:27:23.226 "dma_device_type": 1 00:27:23.226 } 00:27:23.226 ], 00:27:23.226 "driver_specific": { 00:27:23.226 "nvme": [ 00:27:23.226 { 00:27:23.226 "trid": { 00:27:23.226 "trtype": "TCP", 00:27:23.226 "adrfam": "IPv4", 00:27:23.226 "traddr": "10.0.0.2", 00:27:23.226 "trsvcid": "4421", 00:27:23.226 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:23.226 }, 00:27:23.226 "ctrlr_data": { 00:27:23.226 "cntlid": 3, 00:27:23.226 "vendor_id": "0x8086", 00:27:23.226 "model_number": "SPDK bdev Controller", 00:27:23.226 "serial_number": "00000000000000000000", 00:27:23.226 "firmware_revision": "24.09", 00:27:23.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:23.226 "oacs": { 00:27:23.226 "security": 0, 00:27:23.226 "format": 0, 00:27:23.226 "firmware": 0, 00:27:23.226 "ns_manage": 0 00:27:23.226 }, 00:27:23.226 "multi_ctrlr": true, 00:27:23.226 "ana_reporting": false 00:27:23.226 }, 00:27:23.226 "vs": { 00:27:23.226 "nvme_version": "1.3" 00:27:23.226 }, 00:27:23.226 "ns_data": { 00:27:23.226 "id": 1, 00:27:23.226 "can_share": true 00:27:23.226 } 00:27:23.226 } 00:27:23.226 ], 00:27:23.226 "mp_policy": "active_passive" 00:27:23.226 } 00:27:23.226 } 00:27:23.226 ] 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.d0OvHmY3eI 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:23.226 rmmod nvme_tcp 00:27:23.226 rmmod nvme_fabrics 00:27:23.226 rmmod nvme_keyring 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 919038 ']' 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
919038 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 919038 ']' 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 919038 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 919038 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 919038' 00:27:23.226 killing process with pid 919038 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 919038 00:27:23.226 [2024-07-25 04:10:38.410630] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:23.226 [2024-07-25 04:10:38.410667] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:23.226 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 919038 00:27:23.485 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:23.485 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:23.485 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:23.485 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:27:23.485 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:23.485 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.485 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.485 04:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.385 04:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:25.385 00:27:25.385 real 0m5.389s 00:27:25.385 user 0m2.032s 00:27:25.385 sys 0m1.738s 00:27:25.385 04:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:25.385 04:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:25.385 ************************************ 00:27:25.385 END TEST nvmf_async_init 00:27:25.385 ************************************ 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.644 ************************************ 00:27:25.644 START TEST dma 00:27:25.644 ************************************ 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:25.644 * Looking for test storage... 
00:27:25.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.644 04:10:40 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:25.644 00:27:25.644 real 0m0.074s 00:27:25.644 user 0m0.034s 00:27:25.644 sys 0m0.045s 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:25.644 ************************************ 00:27:25.644 END TEST dma 00:27:25.644 ************************************ 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.644 ************************************ 00:27:25.644 START TEST nvmf_identify 00:27:25.644 ************************************ 00:27:25.644 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:25.644 * Looking for test storage... 
00:27:25.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:25.645 04:10:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.176 04:10:42 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:28.176 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:28.176 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.176 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:28.177 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:28.177 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.177 04:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:28.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:28.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:27:28.177 00:27:28.177 --- 10.0.0.2 ping statistics --- 00:27:28.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.177 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:27:28.177 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:28.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:27:28.177 00:27:28.177 --- 10.0.0.1 ping statistics --- 00:27:28.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.177 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:27:28.177 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.177 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:28.177 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:28.177 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.177 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:28.177 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
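The `nvmf_tcp_init` sequence above builds the test topology: the target-side port (`cvl_0_0`) is moved into a private network namespace with 10.0.0.2/24, the initiator keeps `cvl_0_1` with 10.0.0.1/24, an iptables rule admits port 4420, and a ping in each direction verifies connectivity before the target starts. A condensed sketch of those steps, written as a dry-run that only prints each command, since the real sequence needs root and the physical `cvl_0_*` interfaces from this log:

```shell
# Condensed dry-run sketch of the nvmf_tcp_init steps from nvmf/common.sh.
# emit_cmd echoes each step instead of executing it; interface names and
# addresses are taken from the log above, everything else is illustrative.
emit_cmd() { echo "$*"; }

nvmf_tcp_init_sketch() {
        local target_if=$1 initiator_if=$2 ns=${1}_ns_spdk
        emit_cmd ip netns add "$ns"
        emit_cmd ip link set "$target_if" netns "$ns"
        emit_cmd ip addr add 10.0.0.1/24 dev "$initiator_if"
        emit_cmd ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
        emit_cmd ip link set "$initiator_if" up
        emit_cmd ip netns exec "$ns" ip link set "$target_if" up
        emit_cmd ip netns exec "$ns" ip link set lo up
        emit_cmd iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
        emit_cmd ping -c 1 10.0.0.2
        emit_cmd ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Isolating the target in a namespace is what lets later steps run the target with `ip netns exec cvl_0_0_ns_spdk ...` while the initiator talks to it over a real kernel network path rather than loopback.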
00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=921168 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 921168 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 921168 ']' 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:28.178 [2024-07-25 04:10:43.081297] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:27:28.178 [2024-07-25 04:10:43.081388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.178 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.178 [2024-07-25 04:10:43.120918] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:28.178 [2024-07-25 04:10:43.148754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:28.178 [2024-07-25 04:10:43.241686] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.178 [2024-07-25 04:10:43.241747] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.178 [2024-07-25 04:10:43.241763] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.178 [2024-07-25 04:10:43.241777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.178 [2024-07-25 04:10:43.241789] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:28.178 [2024-07-25 04:10:43.241852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.178 [2024-07-25 04:10:43.242199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.178 [2024-07-25 04:10:43.242216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.178 [2024-07-25 04:10:43.242219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:28.178 [2024-07-25 04:10:43.359298] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:28.178 Malloc0 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.178 04:10:43 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:28.178 [2024-07-25 04:10:43.430272] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:28.178 04:10:43 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.178 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:28.179 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.179 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:28.179 [ 00:27:28.179 { 00:27:28.179 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:28.179 "subtype": "Discovery", 00:27:28.179 "listen_addresses": [ 00:27:28.179 { 00:27:28.179 "trtype": "TCP", 00:27:28.179 "adrfam": "IPv4", 00:27:28.179 "traddr": "10.0.0.2", 00:27:28.179 "trsvcid": "4420" 00:27:28.179 } 00:27:28.179 ], 00:27:28.179 "allow_any_host": true, 00:27:28.179 "hosts": [] 00:27:28.179 }, 00:27:28.179 { 00:27:28.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:28.179 "subtype": "NVMe", 00:27:28.179 "listen_addresses": [ 00:27:28.179 { 00:27:28.179 "trtype": "TCP", 00:27:28.179 "adrfam": "IPv4", 00:27:28.179 "traddr": "10.0.0.2", 00:27:28.179 "trsvcid": "4420" 00:27:28.179 } 00:27:28.179 ], 00:27:28.179 "allow_any_host": true, 00:27:28.179 "hosts": [], 00:27:28.179 "serial_number": "SPDK00000000000001", 00:27:28.179 "model_number": "SPDK bdev Controller", 00:27:28.179 "max_namespaces": 32, 00:27:28.179 "min_cntlid": 1, 00:27:28.179 "max_cntlid": 65519, 00:27:28.179 "namespaces": [ 00:27:28.179 { 00:27:28.179 "nsid": 1, 00:27:28.179 "bdev_name": "Malloc0", 00:27:28.179 "name": "Malloc0", 00:27:28.179 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:28.179 "eui64": "ABCDEF0123456789", 00:27:28.179 "uuid": "2747dc61-16bc-46c1-ad11-67983c6471a4" 00:27:28.179 } 00:27:28.179 ] 00:27:28.179 } 00:27:28.179 ] 00:27:28.179 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.179 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:28.179 [2024-07-25 04:10:43.468429] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:27:28.179 [2024-07-25 04:10:43.468469] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid921193 ] 00:27:28.441 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.441 [2024-07-25 04:10:43.484927] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:28.442 [2024-07-25 04:10:43.502712] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:28.442 [2024-07-25 04:10:43.502766] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:28.442 [2024-07-25 04:10:43.502776] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:28.442 [2024-07-25 04:10:43.502801] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:28.442 [2024-07-25 04:10:43.502813] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:28.442 [2024-07-25 04:10:43.503130] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:28.442 [2024-07-25 04:10:43.503176] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1749630 0 00:27:28.442 [2024-07-25 04:10:43.516268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:28.442 [2024-07-25 
04:10:43.516294] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:28.442 [2024-07-25 04:10:43.516304] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:28.442 [2024-07-25 04:10:43.516310] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:28.442 [2024-07-25 04:10:43.516361] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.516373] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.516381] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1749630) 00:27:28.442 [2024-07-25 04:10:43.516399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:28.442 [2024-07-25 04:10:43.516425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1797f80, cid 0, qid 0 00:27:28.442 [2024-07-25 04:10:43.523266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.442 [2024-07-25 04:10:43.523300] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.442 [2024-07-25 04:10:43.523307] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.523315] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1797f80) on tqpair=0x1749630 00:27:28.442 [2024-07-25 04:10:43.523331] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:28.442 [2024-07-25 04:10:43.523342] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:28.442 [2024-07-25 04:10:43.523351] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:28.442 [2024-07-25 04:10:43.523372] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:27:28.442 [2024-07-25 04:10:43.523381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.523388] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1749630) 00:27:28.442 [2024-07-25 04:10:43.523400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.442 [2024-07-25 04:10:43.523424] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1797f80, cid 0, qid 0 00:27:28.442 [2024-07-25 04:10:43.523609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.442 [2024-07-25 04:10:43.523625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.442 [2024-07-25 04:10:43.523632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.523639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1797f80) on tqpair=0x1749630 00:27:28.442 [2024-07-25 04:10:43.523652] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:28.442 [2024-07-25 04:10:43.523666] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:28.442 [2024-07-25 04:10:43.523678] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.523686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.523693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1749630) 00:27:28.442 [2024-07-25 04:10:43.523708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.442 [2024-07-25 04:10:43.523731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1797f80, cid 0, qid 0 00:27:28.442 [2024-07-25 04:10:43.523886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.442 [2024-07-25 04:10:43.523901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.442 [2024-07-25 04:10:43.523908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.523915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1797f80) on tqpair=0x1749630 00:27:28.442 [2024-07-25 04:10:43.523924] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:28.442 [2024-07-25 04:10:43.523938] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:28.442 [2024-07-25 04:10:43.523950] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.523958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.523964] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1749630) 00:27:28.442 [2024-07-25 04:10:43.523975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.442 [2024-07-25 04:10:43.523997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1797f80, cid 0, qid 0 00:27:28.442 [2024-07-25 04:10:43.524128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.442 [2024-07-25 04:10:43.524143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.442 [2024-07-25 04:10:43.524150] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.524157] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1797f80) on tqpair=0x1749630 00:27:28.442 [2024-07-25 04:10:43.524166] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:28.442 [2024-07-25 04:10:43.524182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.524191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.524198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1749630) 00:27:28.442 [2024-07-25 04:10:43.524209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.442 [2024-07-25 04:10:43.524247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1797f80, cid 0, qid 0 00:27:28.442 [2024-07-25 04:10:43.524361] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.442 [2024-07-25 04:10:43.524376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.442 [2024-07-25 04:10:43.524383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.524390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1797f80) on tqpair=0x1749630 00:27:28.442 [2024-07-25 04:10:43.524399] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:28.442 [2024-07-25 04:10:43.524407] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:28.442 [2024-07-25 04:10:43.524420] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:28.442 [2024-07-25 04:10:43.524530] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 
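The state transitions just logged (CC.EN = 0 && CSTS.RDY = 0 → "controller is disabled" → "enable controller by writing CC.EN = 1" → wait for CSTS.RDY = 1) follow the controller-enable handshake from the NVMe base specification, carried here over fabric PROPERTY GET/SET commands. A toy sketch of that sequence, with a fake register file standing in for the real property accesses (everything below is illustrative, not SPDK code):

```python
class FakeCtrlrRegs:
    """Toy register file standing in for property get/set over the fabric."""
    def __init__(self):
        self.cc_en = 0
        self.csts_rdy = 0

    def write_cc_en(self, value: int) -> None:
        self.cc_en = value
        if value == 1:
            self.csts_rdy = 1  # a real controller raises RDY asynchronously

def enable_controller(regs: FakeCtrlrRegs) -> list[str]:
    """Walk the enable states named in the trace above."""
    states = []
    if regs.cc_en == 0 and regs.csts_rdy == 0:
        states.append("controller is disabled")
        regs.write_cc_en(1)
        states.append("enable controller by writing CC.EN = 1")
    while regs.csts_rdy != 1:  # "wait for CSTS.RDY = 1" in the trace
        pass
    states.append("ready")
    return states

print(enable_controller(FakeCtrlrRegs())[-1])  # → ready
```

In the real driver each poll of CSTS.RDY is a full FABRIC PROPERTY GET round trip, which is why the trace repeats the capsule send/complete pattern between state changes.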
00:27:28.442 [2024-07-25 04:10:43.524538] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:28.442 [2024-07-25 04:10:43.524580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.524590] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.524597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1749630) 00:27:28.442 [2024-07-25 04:10:43.524607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.442 [2024-07-25 04:10:43.524643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1797f80, cid 0, qid 0 00:27:28.442 [2024-07-25 04:10:43.524845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.442 [2024-07-25 04:10:43.524861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.442 [2024-07-25 04:10:43.524869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.524876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1797f80) on tqpair=0x1749630 00:27:28.442 [2024-07-25 04:10:43.524884] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:28.442 [2024-07-25 04:10:43.524900] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.524909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.524916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1749630) 00:27:28.442 [2024-07-25 04:10:43.524927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.442 [2024-07-25 04:10:43.524948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1797f80, cid 0, qid 0 00:27:28.442 [2024-07-25 04:10:43.525078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.442 [2024-07-25 04:10:43.525093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.442 [2024-07-25 04:10:43.525100] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.525106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1797f80) on tqpair=0x1749630 00:27:28.442 [2024-07-25 04:10:43.525114] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:28.442 [2024-07-25 04:10:43.525123] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:28.442 [2024-07-25 04:10:43.525136] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:28.442 [2024-07-25 04:10:43.525156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:28.442 [2024-07-25 04:10:43.525171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.442 [2024-07-25 04:10:43.525179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1749630) 00:27:28.442 [2024-07-25 04:10:43.525190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.442 [2024-07-25 04:10:43.525211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1797f80, cid 0, qid 0 00:27:28.442 
[2024-07-25 04:10:43.525421] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:28.443 [2024-07-25 04:10:43.525435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:28.443 [2024-07-25 04:10:43.525442] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525449] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1749630): datao=0, datal=4096, cccid=0 00:27:28.443 [2024-07-25 04:10:43.525457] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1797f80) on tqpair(0x1749630): expected_datao=0, payload_size=4096 00:27:28.443 [2024-07-25 04:10:43.525466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525481] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525490] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.443 [2024-07-25 04:10:43.525520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.443 [2024-07-25 04:10:43.525527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1797f80) on tqpair=0x1749630 00:27:28.443 [2024-07-25 04:10:43.525550] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:28.443 [2024-07-25 04:10:43.525559] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:28.443 [2024-07-25 04:10:43.525567] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:28.443 [2024-07-25 04:10:43.525576] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:28.443 [2024-07-25 04:10:43.525584] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:28.443 [2024-07-25 04:10:43.525592] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:28.443 [2024-07-25 04:10:43.525606] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:28.443 [2024-07-25 04:10:43.525622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1749630) 00:27:28.443 [2024-07-25 04:10:43.525648] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:28.443 [2024-07-25 04:10:43.525670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1797f80, cid 0, qid 0 00:27:28.443 [2024-07-25 04:10:43.525816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.443 [2024-07-25 04:10:43.525831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.443 [2024-07-25 04:10:43.525837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525844] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1797f80) on tqpair=0x1749630 00:27:28.443 [2024-07-25 04:10:43.525856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525870] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1749630) 00:27:28.443 [2024-07-25 04:10:43.525880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.443 [2024-07-25 04:10:43.525889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525896] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525903] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1749630) 00:27:28.443 [2024-07-25 04:10:43.525911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.443 [2024-07-25 04:10:43.525921] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1749630) 00:27:28.443 [2024-07-25 04:10:43.525943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.443 [2024-07-25 04:10:43.525957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.525985] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1749630) 00:27:28.443 [2024-07-25 04:10:43.525994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.443 [2024-07-25 04:10:43.526003] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive 
timeout (timeout 30000 ms) 00:27:28.443 [2024-07-25 04:10:43.526022] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:28.443 [2024-07-25 04:10:43.526034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.526041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1749630) 00:27:28.443 [2024-07-25 04:10:43.526051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.443 [2024-07-25 04:10:43.526087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1797f80, cid 0, qid 0 00:27:28.443 [2024-07-25 04:10:43.526099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1798100, cid 1, qid 0 00:27:28.443 [2024-07-25 04:10:43.526107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1798280, cid 2, qid 0 00:27:28.443 [2024-07-25 04:10:43.526114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1798400, cid 3, qid 0 00:27:28.443 [2024-07-25 04:10:43.526121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1798580, cid 4, qid 0 00:27:28.443 [2024-07-25 04:10:43.526378] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.443 [2024-07-25 04:10:43.526394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.443 [2024-07-25 04:10:43.526400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.526407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1798580) on tqpair=0x1749630 00:27:28.443 [2024-07-25 04:10:43.526416] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:28.443 [2024-07-25 
04:10:43.526425] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:28.443 [2024-07-25 04:10:43.526442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.526451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1749630) 00:27:28.443 [2024-07-25 04:10:43.526462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.443 [2024-07-25 04:10:43.526483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1798580, cid 4, qid 0 00:27:28.443 [2024-07-25 04:10:43.526649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:28.443 [2024-07-25 04:10:43.526664] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:28.443 [2024-07-25 04:10:43.526671] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.526678] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1749630): datao=0, datal=4096, cccid=4 00:27:28.443 [2024-07-25 04:10:43.526685] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1798580) on tqpair(0x1749630): expected_datao=0, payload_size=4096 00:27:28.443 [2024-07-25 04:10:43.526693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.526728] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.526737] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.526811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.443 [2024-07-25 04:10:43.526830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.443 [2024-07-25 04:10:43.526838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.526845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1798580) on tqpair=0x1749630 00:27:28.443 [2024-07-25 04:10:43.526874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:28.443 [2024-07-25 04:10:43.526909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.526920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1749630) 00:27:28.443 [2024-07-25 04:10:43.526930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.443 [2024-07-25 04:10:43.526942] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.526949] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.526956] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1749630) 00:27:28.443 [2024-07-25 04:10:43.526965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.443 [2024-07-25 04:10:43.526991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1798580, cid 4, qid 0 00:27:28.443 [2024-07-25 04:10:43.527003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1798700, cid 5, qid 0 00:27:28.443 [2024-07-25 04:10:43.527208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:28.443 [2024-07-25 04:10:43.527221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:28.443 [2024-07-25 04:10:43.527228] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.527235] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1749630): datao=0, datal=1024, cccid=4 00:27:28.443 [2024-07-25 04:10:43.531254] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1798580) on tqpair(0x1749630): expected_datao=0, payload_size=1024 00:27:28.443 [2024-07-25 04:10:43.531266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.531277] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.531284] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.531292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.443 [2024-07-25 04:10:43.531301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.443 [2024-07-25 04:10:43.531308] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.443 [2024-07-25 04:10:43.531315] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1798700) on tqpair=0x1749630 00:27:28.443 [2024-07-25 04:10:43.568363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.443 [2024-07-25 04:10:43.568384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.444 [2024-07-25 04:10:43.568392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.444 [2024-07-25 04:10:43.568399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1798580) on tqpair=0x1749630 00:27:28.444 [2024-07-25 04:10:43.568417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.444 [2024-07-25 04:10:43.568427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1749630) 00:27:28.444 [2024-07-25 04:10:43.568439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.444 [2024-07-25 04:10:43.568470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1798580, cid 4, qid 0 00:27:28.444 [2024-07-25 04:10:43.568614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:28.444 [2024-07-25 04:10:43.568630] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:28.444 [2024-07-25 04:10:43.568637] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:28.444 [2024-07-25 04:10:43.568650] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1749630): datao=0, datal=3072, cccid=4 00:27:28.444 [2024-07-25 04:10:43.568659] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1798580) on tqpair(0x1749630): expected_datao=0, payload_size=3072 00:27:28.444 [2024-07-25 04:10:43.568667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.444 [2024-07-25 04:10:43.568678] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:28.444 [2024-07-25 04:10:43.568686] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:28.444 [2024-07-25 04:10:43.568708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.444 [2024-07-25 04:10:43.568719] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.444 [2024-07-25 04:10:43.568726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.444 [2024-07-25 04:10:43.568733] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1798580) on tqpair=0x1749630 00:27:28.444 [2024-07-25 04:10:43.568748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.444 [2024-07-25 04:10:43.568756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1749630) 00:27:28.444 [2024-07-25 04:10:43.568767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.444 [2024-07-25 04:10:43.568796] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1798580, cid 4, qid 0 00:27:28.444 [2024-07-25 04:10:43.568940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:28.444 [2024-07-25 04:10:43.568956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:28.444 [2024-07-25 04:10:43.568963] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:28.444 [2024-07-25 04:10:43.568969] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1749630): datao=0, datal=8, cccid=4 00:27:28.444 [2024-07-25 04:10:43.568977] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1798580) on tqpair(0x1749630): expected_datao=0, payload_size=8 00:27:28.444 [2024-07-25 04:10:43.568985] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.444 [2024-07-25 04:10:43.568994] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:28.444 [2024-07-25 04:10:43.569002] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:28.444 [2024-07-25 04:10:43.610422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.444 [2024-07-25 04:10:43.610443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.444 [2024-07-25 04:10:43.610452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.444 [2024-07-25 04:10:43.610459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1798580) on tqpair=0x1749630 00:27:28.444 ===================================================== 00:27:28.444 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:28.444 ===================================================== 00:27:28.444 Controller Capabilities/Features 00:27:28.444 ================================ 00:27:28.444 Vendor ID: 0000 00:27:28.444 Subsystem Vendor ID: 0000 00:27:28.444 Serial Number: .................... 
00:27:28.444 Model Number: ........................................ 00:27:28.444 Firmware Version: 24.09 00:27:28.444 Recommended Arb Burst: 0 00:27:28.444 IEEE OUI Identifier: 00 00 00 00:27:28.444 Multi-path I/O 00:27:28.444 May have multiple subsystem ports: No 00:27:28.444 May have multiple controllers: No 00:27:28.444 Associated with SR-IOV VF: No 00:27:28.444 Max Data Transfer Size: 131072 00:27:28.444 Max Number of Namespaces: 0 00:27:28.444 Max Number of I/O Queues: 1024 00:27:28.444 NVMe Specification Version (VS): 1.3 00:27:28.444 NVMe Specification Version (Identify): 1.3 00:27:28.444 Maximum Queue Entries: 128 00:27:28.444 Contiguous Queues Required: Yes 00:27:28.444 Arbitration Mechanisms Supported 00:27:28.444 Weighted Round Robin: Not Supported 00:27:28.444 Vendor Specific: Not Supported 00:27:28.444 Reset Timeout: 15000 ms 00:27:28.444 Doorbell Stride: 4 bytes 00:27:28.444 NVM Subsystem Reset: Not Supported 00:27:28.444 Command Sets Supported 00:27:28.444 NVM Command Set: Supported 00:27:28.444 Boot Partition: Not Supported 00:27:28.444 Memory Page Size Minimum: 4096 bytes 00:27:28.444 Memory Page Size Maximum: 4096 bytes 00:27:28.444 Persistent Memory Region: Not Supported 00:27:28.444 Optional Asynchronous Events Supported 00:27:28.444 Namespace Attribute Notices: Not Supported 00:27:28.444 Firmware Activation Notices: Not Supported 00:27:28.444 ANA Change Notices: Not Supported 00:27:28.444 PLE Aggregate Log Change Notices: Not Supported 00:27:28.444 LBA Status Info Alert Notices: Not Supported 00:27:28.444 EGE Aggregate Log Change Notices: Not Supported 00:27:28.444 Normal NVM Subsystem Shutdown event: Not Supported 00:27:28.444 Zone Descriptor Change Notices: Not Supported 00:27:28.444 Discovery Log Change Notices: Supported 00:27:28.444 Controller Attributes 00:27:28.444 128-bit Host Identifier: Not Supported 00:27:28.444 Non-Operational Permissive Mode: Not Supported 00:27:28.444 NVM Sets: Not Supported 00:27:28.444 Read Recovery Levels: Not 
Supported 00:27:28.444 Endurance Groups: Not Supported 00:27:28.444 Predictable Latency Mode: Not Supported 00:27:28.444 Traffic Based Keep ALive: Not Supported 00:27:28.444 Namespace Granularity: Not Supported 00:27:28.444 SQ Associations: Not Supported 00:27:28.444 UUID List: Not Supported 00:27:28.444 Multi-Domain Subsystem: Not Supported 00:27:28.444 Fixed Capacity Management: Not Supported 00:27:28.444 Variable Capacity Management: Not Supported 00:27:28.444 Delete Endurance Group: Not Supported 00:27:28.444 Delete NVM Set: Not Supported 00:27:28.444 Extended LBA Formats Supported: Not Supported 00:27:28.444 Flexible Data Placement Supported: Not Supported 00:27:28.444 00:27:28.444 Controller Memory Buffer Support 00:27:28.444 ================================ 00:27:28.444 Supported: No 00:27:28.444 00:27:28.444 Persistent Memory Region Support 00:27:28.444 ================================ 00:27:28.444 Supported: No 00:27:28.444 00:27:28.444 Admin Command Set Attributes 00:27:28.444 ============================ 00:27:28.444 Security Send/Receive: Not Supported 00:27:28.444 Format NVM: Not Supported 00:27:28.444 Firmware Activate/Download: Not Supported 00:27:28.444 Namespace Management: Not Supported 00:27:28.444 Device Self-Test: Not Supported 00:27:28.444 Directives: Not Supported 00:27:28.444 NVMe-MI: Not Supported 00:27:28.444 Virtualization Management: Not Supported 00:27:28.444 Doorbell Buffer Config: Not Supported 00:27:28.444 Get LBA Status Capability: Not Supported 00:27:28.444 Command & Feature Lockdown Capability: Not Supported 00:27:28.444 Abort Command Limit: 1 00:27:28.444 Async Event Request Limit: 4 00:27:28.444 Number of Firmware Slots: N/A 00:27:28.444 Firmware Slot 1 Read-Only: N/A 00:27:28.444 Firmware Activation Without Reset: N/A 00:27:28.444 Multiple Update Detection Support: N/A 00:27:28.444 Firmware Update Granularity: No Information Provided 00:27:28.444 Per-Namespace SMART Log: No 00:27:28.444 Asymmetric Namespace Access Log Page: Not 
Supported 00:27:28.444 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:28.444 Command Effects Log Page: Not Supported 00:27:28.444 Get Log Page Extended Data: Supported 00:27:28.444 Telemetry Log Pages: Not Supported 00:27:28.444 Persistent Event Log Pages: Not Supported 00:27:28.444 Supported Log Pages Log Page: May Support 00:27:28.444 Commands Supported & Effects Log Page: Not Supported 00:27:28.444 Feature Identifiers & Effects Log Page:May Support 00:27:28.444 NVMe-MI Commands & Effects Log Page: May Support 00:27:28.444 Data Area 4 for Telemetry Log: Not Supported 00:27:28.444 Error Log Page Entries Supported: 128 00:27:28.444 Keep Alive: Not Supported 00:27:28.444 00:27:28.444 NVM Command Set Attributes 00:27:28.444 ========================== 00:27:28.444 Submission Queue Entry Size 00:27:28.444 Max: 1 00:27:28.444 Min: 1 00:27:28.444 Completion Queue Entry Size 00:27:28.444 Max: 1 00:27:28.444 Min: 1 00:27:28.444 Number of Namespaces: 0 00:27:28.444 Compare Command: Not Supported 00:27:28.444 Write Uncorrectable Command: Not Supported 00:27:28.444 Dataset Management Command: Not Supported 00:27:28.444 Write Zeroes Command: Not Supported 00:27:28.444 Set Features Save Field: Not Supported 00:27:28.444 Reservations: Not Supported 00:27:28.444 Timestamp: Not Supported 00:27:28.444 Copy: Not Supported 00:27:28.444 Volatile Write Cache: Not Present 00:27:28.445 Atomic Write Unit (Normal): 1 00:27:28.445 Atomic Write Unit (PFail): 1 00:27:28.445 Atomic Compare & Write Unit: 1 00:27:28.445 Fused Compare & Write: Supported 00:27:28.445 Scatter-Gather List 00:27:28.445 SGL Command Set: Supported 00:27:28.445 SGL Keyed: Supported 00:27:28.445 SGL Bit Bucket Descriptor: Not Supported 00:27:28.445 SGL Metadata Pointer: Not Supported 00:27:28.445 Oversized SGL: Not Supported 00:27:28.445 SGL Metadata Address: Not Supported 00:27:28.445 SGL Offset: Supported 00:27:28.445 Transport SGL Data Block: Not Supported 00:27:28.445 Replay Protected Memory Block: Not 
Supported 00:27:28.445 00:27:28.445 Firmware Slot Information 00:27:28.445 ========================= 00:27:28.445 Active slot: 0 00:27:28.445 00:27:28.445 00:27:28.445 Error Log 00:27:28.445 ========= 00:27:28.445 00:27:28.445 Active Namespaces 00:27:28.445 ================= 00:27:28.445 Discovery Log Page 00:27:28.445 ================== 00:27:28.445 Generation Counter: 2 00:27:28.445 Number of Records: 2 00:27:28.445 Record Format: 0 00:27:28.445 00:27:28.445 Discovery Log Entry 0 00:27:28.445 ---------------------- 00:27:28.445 Transport Type: 3 (TCP) 00:27:28.445 Address Family: 1 (IPv4) 00:27:28.445 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:28.445 Entry Flags: 00:27:28.445 Duplicate Returned Information: 1 00:27:28.445 Explicit Persistent Connection Support for Discovery: 1 00:27:28.445 Transport Requirements: 00:27:28.445 Secure Channel: Not Required 00:27:28.445 Port ID: 0 (0x0000) 00:27:28.445 Controller ID: 65535 (0xffff) 00:27:28.445 Admin Max SQ Size: 128 00:27:28.445 Transport Service Identifier: 4420 00:27:28.445 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:28.445 Transport Address: 10.0.0.2 00:27:28.445 Discovery Log Entry 1 00:27:28.445 ---------------------- 00:27:28.445 Transport Type: 3 (TCP) 00:27:28.445 Address Family: 1 (IPv4) 00:27:28.445 Subsystem Type: 2 (NVM Subsystem) 00:27:28.445 Entry Flags: 00:27:28.445 Duplicate Returned Information: 0 00:27:28.445 Explicit Persistent Connection Support for Discovery: 0 00:27:28.445 Transport Requirements: 00:27:28.445 Secure Channel: Not Required 00:27:28.445 Port ID: 0 (0x0000) 00:27:28.445 Controller ID: 65535 (0xffff) 00:27:28.445 Admin Max SQ Size: 128 00:27:28.445 Transport Service Identifier: 4420 00:27:28.445 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:28.445 Transport Address: 10.0.0.2 [2024-07-25 04:10:43.610574] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 
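The two discovery log entries dumped above use the numeric field encodings from the NVMe-oF discovery log page: transport type 3 is TCP, address family 1 is IPv4, and subsystem type 2/3 distinguish an NVM subsystem from the discovery subsystem itself. A small sketch mapping those codes back to names for the two entries shown:

```python
# Field encodings per the NVMe-oF discovery log page entry definition
TRTYPE = {1: "RDMA", 2: "FC", 3: "TCP"}
ADRFAM = {1: "IPv4", 2: "IPv6", 3: "IB", 4: "FC"}
SUBTYPE = {2: "NVM Subsystem", 3: "Current Discovery Subsystem"}

# (trtype, adrfam, subtype, trsvcid, traddr, subnqn) from the dump above
entries = [
    (3, 1, 3, "4420", "10.0.0.2", "nqn.2014-08.org.nvmexpress.discovery"),
    (3, 1, 2, "4420", "10.0.0.2", "nqn.2016-06.io.spdk:cnode1"),
]

def describe(trtype, adrfam, subtype, trsvcid, traddr, subnqn) -> str:
    return (f"{TRTYPE[trtype]}/{ADRFAM[adrfam]} {SUBTYPE[subtype]}: "
            f"{subnqn} at {traddr}:{trsvcid}")

for entry in entries:
    print(describe(*entry))
```

The equivalent host-side query with nvme-cli would be `nvme discover -t tcp -a 10.0.0.2 -s 4420`, which prints the same two records.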
00:27:28.445 [2024-07-25 04:10:43.610596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1797f80) on tqpair=0x1749630 00:27:28.445 [2024-07-25 04:10:43.610613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-25 04:10:43.610622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1798100) on tqpair=0x1749630 00:27:28.445 [2024-07-25 04:10:43.610630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-25 04:10:43.610638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1798280) on tqpair=0x1749630 00:27:28.445 [2024-07-25 04:10:43.610646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-25 04:10:43.610654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1798400) on tqpair=0x1749630 00:27:28.445 [2024-07-25 04:10:43.610662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-25 04:10:43.610680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.445 [2024-07-25 04:10:43.610709] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.445 [2024-07-25 04:10:43.610717] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1749630) 00:27:28.445 [2024-07-25 04:10:43.610728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-25 04:10:43.610754] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1798400, cid 3, qid 0 00:27:28.445 [2024-07-25 04:10:43.610923] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:27:28.445 [2024-07-25 04:10:43.610936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.445 [2024-07-25 04:10:43.610943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.445 [2024-07-25 04:10:43.610950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1798400) on tqpair=0x1749630 00:27:28.445 [2024-07-25 04:10:43.610962] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.445 [2024-07-25 04:10:43.610970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.445 [2024-07-25 04:10:43.610976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1749630) 00:27:28.445 [2024-07-25 04:10:43.610987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-25 04:10:43.611014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1798400, cid 3, qid 0 00:27:28.445 [2024-07-25 04:10:43.611151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.445 [2024-07-25 04:10:43.611166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.445 [2024-07-25 04:10:43.611173] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.445 [2024-07-25 04:10:43.611180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1798400) on tqpair=0x1749630 00:27:28.445 [2024-07-25 04:10:43.611189] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:28.445 [2024-07-25 04:10:43.611197] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:28.445 [2024-07-25 04:10:43.611213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.445 [2024-07-25 04:10:43.611222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:27:28.445 [2024-07-25 04:10:43.611229] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1749630) 00:27:28.445 [2024-07-25 04:10:43.611239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-25 04:10:43.615277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1798400, cid 3, qid 0 00:27:28.445 [2024-07-25 04:10:43.615453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.445 [2024-07-25 04:10:43.615466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.445 [2024-07-25 04:10:43.615473] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.445 [2024-07-25 04:10:43.615480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1798400) on tqpair=0x1749630 00:27:28.445 [2024-07-25 04:10:43.615494] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:27:28.445 00:27:28.445 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:28.445 [2024-07-25 04:10:43.647194] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:27:28.445 [2024-07-25 04:10:43.647274] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid921324 ] 00:27:28.445 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.445 [2024-07-25 04:10:43.663534] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:27:28.445 [2024-07-25 04:10:43.680946] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:28.445 [2024-07-25 04:10:43.680993] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:28.445 [2024-07-25 04:10:43.681002] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:28.445 [2024-07-25 04:10:43.681015] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:28.445 [2024-07-25 04:10:43.681026] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:28.445 [2024-07-25 04:10:43.681223] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:28.445 [2024-07-25 04:10:43.681267] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd87630 0 00:27:28.445 [2024-07-25 04:10:43.695250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:28.445 [2024-07-25 04:10:43.695273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:28.445 [2024-07-25 04:10:43.695282] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:28.445 [2024-07-25 04:10:43.695296] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:28.445 [2024-07-25 04:10:43.695335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.445 [2024-07-25 04:10:43.695346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.445 [2024-07-25 04:10:43.695352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87630) 00:27:28.445 [2024-07-25 04:10:43.695366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:28.445 
[2024-07-25 04:10:43.695392] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd5f80, cid 0, qid 0 00:27:28.445 [2024-07-25 04:10:43.703271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.445 [2024-07-25 04:10:43.703292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.445 [2024-07-25 04:10:43.703299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.445 [2024-07-25 04:10:43.703307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd5f80) on tqpair=0xd87630 00:27:28.445 [2024-07-25 04:10:43.703324] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:28.446 [2024-07-25 04:10:43.703334] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:28.446 [2024-07-25 04:10:43.703344] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:28.446 [2024-07-25 04:10:43.703361] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.703369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.703376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87630) 00:27:28.446 [2024-07-25 04:10:43.703387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-25 04:10:43.703409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd5f80, cid 0, qid 0 00:27:28.446 [2024-07-25 04:10:43.703568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.446 [2024-07-25 04:10:43.703583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.446 [2024-07-25 04:10:43.703590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:27:28.446 [2024-07-25 04:10:43.703597] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd5f80) on tqpair=0xd87630 00:27:28.446 [2024-07-25 04:10:43.703613] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:28.446 [2024-07-25 04:10:43.703628] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:28.446 [2024-07-25 04:10:43.703640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.703648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.703654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87630) 00:27:28.446 [2024-07-25 04:10:43.703665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-25 04:10:43.703687] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd5f80, cid 0, qid 0 00:27:28.446 [2024-07-25 04:10:43.703809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.446 [2024-07-25 04:10:43.703821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.446 [2024-07-25 04:10:43.703828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.703835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd5f80) on tqpair=0xd87630 00:27:28.446 [2024-07-25 04:10:43.703843] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:28.446 [2024-07-25 04:10:43.703857] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:28.446 [2024-07-25 04:10:43.703869] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.703876] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.703883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87630) 00:27:28.446 [2024-07-25 04:10:43.703894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-25 04:10:43.703914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd5f80, cid 0, qid 0 00:27:28.446 [2024-07-25 04:10:43.704022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.446 [2024-07-25 04:10:43.704037] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.446 [2024-07-25 04:10:43.704044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.704051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd5f80) on tqpair=0xd87630 00:27:28.446 [2024-07-25 04:10:43.704059] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:28.446 [2024-07-25 04:10:43.704075] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.704084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.704091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87630) 00:27:28.446 [2024-07-25 04:10:43.704101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-25 04:10:43.704122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd5f80, cid 0, qid 0 00:27:28.446 [2024-07-25 04:10:43.704237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:27:28.446 [2024-07-25 04:10:43.704261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.446 [2024-07-25 04:10:43.704269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.704275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd5f80) on tqpair=0xd87630 00:27:28.446 [2024-07-25 04:10:43.704283] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:28.446 [2024-07-25 04:10:43.704291] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:28.446 [2024-07-25 04:10:43.704308] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:28.446 [2024-07-25 04:10:43.704419] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:28.446 [2024-07-25 04:10:43.704426] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:28.446 [2024-07-25 04:10:43.704438] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.704446] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.704452] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87630) 00:27:28.446 [2024-07-25 04:10:43.704477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-25 04:10:43.704499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd5f80, cid 0, qid 0 00:27:28.446 [2024-07-25 04:10:43.704667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:27:28.446 [2024-07-25 04:10:43.704682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.446 [2024-07-25 04:10:43.704689] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.704696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd5f80) on tqpair=0xd87630 00:27:28.446 [2024-07-25 04:10:43.704704] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:28.446 [2024-07-25 04:10:43.704721] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.704730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.704737] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87630) 00:27:28.446 [2024-07-25 04:10:43.704747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-25 04:10:43.704768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd5f80, cid 0, qid 0 00:27:28.446 [2024-07-25 04:10:43.704884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.446 [2024-07-25 04:10:43.704896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.446 [2024-07-25 04:10:43.704903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.704910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd5f80) on tqpair=0xd87630 00:27:28.446 [2024-07-25 04:10:43.704917] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:28.446 [2024-07-25 04:10:43.704925] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue 
(timeout 30000 ms) 00:27:28.446 [2024-07-25 04:10:43.704938] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:28.446 [2024-07-25 04:10:43.704952] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:28.446 [2024-07-25 04:10:43.704965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.446 [2024-07-25 04:10:43.704973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87630) 00:27:28.447 [2024-07-25 04:10:43.704983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-25 04:10:43.705004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd5f80, cid 0, qid 0 00:27:28.447 [2024-07-25 04:10:43.705162] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:28.447 [2024-07-25 04:10:43.705175] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:28.447 [2024-07-25 04:10:43.705182] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705191] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd87630): datao=0, datal=4096, cccid=0 00:27:28.447 [2024-07-25 04:10:43.705200] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd5f80) on tqpair(0xd87630): expected_datao=0, payload_size=4096 00:27:28.447 [2024-07-25 04:10:43.705207] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705218] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705226] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705253] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.447 [2024-07-25 04:10:43.705266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.447 [2024-07-25 04:10:43.705273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd5f80) on tqpair=0xd87630 00:27:28.447 [2024-07-25 04:10:43.705290] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:28.447 [2024-07-25 04:10:43.705298] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:28.447 [2024-07-25 04:10:43.705306] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:28.447 [2024-07-25 04:10:43.705312] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:28.447 [2024-07-25 04:10:43.705320] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:28.447 [2024-07-25 04:10:43.705328] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:28.447 [2024-07-25 04:10:43.705342] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:28.447 [2024-07-25 04:10:43.705358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87630) 00:27:28.447 [2024-07-25 04:10:43.705384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION 
cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:28.447 [2024-07-25 04:10:43.705405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd5f80, cid 0, qid 0 00:27:28.447 [2024-07-25 04:10:43.705525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.447 [2024-07-25 04:10:43.705540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.447 [2024-07-25 04:10:43.705547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705554] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd5f80) on tqpair=0xd87630 00:27:28.447 [2024-07-25 04:10:43.705564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87630) 00:27:28.447 [2024-07-25 04:10:43.705587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.447 [2024-07-25 04:10:43.705597] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705604] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd87630) 00:27:28.447 [2024-07-25 04:10:43.705619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.447 [2024-07-25 04:10:43.705628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=2 on tqpair(0xd87630) 00:27:28.447 [2024-07-25 04:10:43.705655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.447 [2024-07-25 04:10:43.705664] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87630) 00:27:28.447 [2024-07-25 04:10:43.705702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.447 [2024-07-25 04:10:43.705711] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:28.447 [2024-07-25 04:10:43.705729] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:28.447 [2024-07-25 04:10:43.705741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.705748] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd87630) 00:27:28.447 [2024-07-25 04:10:43.705758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-25 04:10:43.705794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd5f80, cid 0, qid 0 00:27:28.447 [2024-07-25 04:10:43.705805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6100, cid 1, qid 0 00:27:28.447 [2024-07-25 04:10:43.705813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6280, cid 2, qid 0 00:27:28.447 [2024-07-25 04:10:43.705820] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6400, cid 3, qid 0 00:27:28.447 [2024-07-25 04:10:43.705828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6580, cid 4, qid 0 00:27:28.447 [2024-07-25 04:10:43.706055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.447 [2024-07-25 04:10:43.706068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.447 [2024-07-25 04:10:43.706074] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.706081] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6580) on tqpair=0xd87630 00:27:28.447 [2024-07-25 04:10:43.706089] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:28.447 [2024-07-25 04:10:43.706098] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:28.447 [2024-07-25 04:10:43.706116] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:28.447 [2024-07-25 04:10:43.706128] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:28.447 [2024-07-25 04:10:43.706138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.706145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.706167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd87630) 00:27:28.447 [2024-07-25 04:10:43.706178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:28.447 [2024-07-25 04:10:43.706199] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6580, cid 4, qid 0 00:27:28.447 [2024-07-25 04:10:43.706368] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.447 [2024-07-25 04:10:43.706384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.447 [2024-07-25 04:10:43.706394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.706402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6580) on tqpair=0xd87630 00:27:28.447 [2024-07-25 04:10:43.706469] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:28.447 [2024-07-25 04:10:43.706487] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:28.447 [2024-07-25 04:10:43.706502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.706509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd87630) 00:27:28.447 [2024-07-25 04:10:43.706520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-25 04:10:43.706541] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6580, cid 4, qid 0 00:27:28.447 [2024-07-25 04:10:43.706767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:28.447 [2024-07-25 04:10:43.706783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:28.447 [2024-07-25 04:10:43.706789] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.706796] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd87630): datao=0, datal=4096, cccid=4 00:27:28.447 [2024-07-25 04:10:43.706804] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd6580) on tqpair(0xd87630): expected_datao=0, payload_size=4096 00:27:28.447 [2024-07-25 04:10:43.706811] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.706832] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.706841] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.706914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.447 [2024-07-25 04:10:43.706929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.447 [2024-07-25 04:10:43.706936] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.706943] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6580) on tqpair=0xd87630 00:27:28.447 [2024-07-25 04:10:43.706957] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:28.447 [2024-07-25 04:10:43.706978] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:28.447 [2024-07-25 04:10:43.706995] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:28.447 [2024-07-25 04:10:43.707009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.447 [2024-07-25 04:10:43.707016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd87630) 00:27:28.447 [2024-07-25 04:10:43.707027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-25 04:10:43.707048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6580, cid 4, qid 0 00:27:28.448 [2024-07-25 
04:10:43.707180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:28.448 [2024-07-25 04:10:43.707195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:28.448 [2024-07-25 04:10:43.707202] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.707209] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd87630): datao=0, datal=4096, cccid=4 00:27:28.448 [2024-07-25 04:10:43.707216] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd6580) on tqpair(0xd87630): expected_datao=0, payload_size=4096 00:27:28.448 [2024-07-25 04:10:43.707224] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.711251] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.711265] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.711284] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.448 [2024-07-25 04:10:43.711294] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.448 [2024-07-25 04:10:43.711301] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.711308] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6580) on tqpair=0xd87630 00:27:28.448 [2024-07-25 04:10:43.711327] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:28.448 [2024-07-25 04:10:43.711346] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:28.448 [2024-07-25 04:10:43.711360] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.711367] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0xd87630) 00:27:28.448 [2024-07-25 04:10:43.711378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-25 04:10:43.711399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6580, cid 4, qid 0 00:27:28.448 [2024-07-25 04:10:43.711584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:28.448 [2024-07-25 04:10:43.711596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:28.448 [2024-07-25 04:10:43.711603] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.711610] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd87630): datao=0, datal=4096, cccid=4 00:27:28.448 [2024-07-25 04:10:43.711617] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd6580) on tqpair(0xd87630): expected_datao=0, payload_size=4096 00:27:28.448 [2024-07-25 04:10:43.711625] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.711641] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.711650] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.711716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.448 [2024-07-25 04:10:43.711730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.448 [2024-07-25 04:10:43.711737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.711744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6580) on tqpair=0xd87630 00:27:28.448 [2024-07-25 04:10:43.711756] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:28.448 
[2024-07-25 04:10:43.711771] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:28.448 [2024-07-25 04:10:43.711786] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:28.448 [2024-07-25 04:10:43.711799] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:28.448 [2024-07-25 04:10:43.711808] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:28.448 [2024-07-25 04:10:43.711816] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:28.448 [2024-07-25 04:10:43.711825] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:28.448 [2024-07-25 04:10:43.711832] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:28.448 [2024-07-25 04:10:43.711841] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:28.448 [2024-07-25 04:10:43.711862] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.711872] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd87630) 00:27:28.448 [2024-07-25 04:10:43.711883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-25 04:10:43.711893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.711900] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.711907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd87630) 00:27:28.448 [2024-07-25 04:10:43.711916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.448 [2024-07-25 04:10:43.711958] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6580, cid 4, qid 0 00:27:28.448 [2024-07-25 04:10:43.711970] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6700, cid 5, qid 0 00:27:28.448 [2024-07-25 04:10:43.712157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.448 [2024-07-25 04:10:43.712170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.448 [2024-07-25 04:10:43.712177] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.712184] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6580) on tqpair=0xd87630 00:27:28.448 [2024-07-25 04:10:43.712194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.448 [2024-07-25 04:10:43.712203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.448 [2024-07-25 04:10:43.712209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.712216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6700) on tqpair=0xd87630 00:27:28.448 [2024-07-25 04:10:43.712231] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.712240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd87630) 00:27:28.448 [2024-07-25 04:10:43.712259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-25 
04:10:43.712280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6700, cid 5, qid 0 00:27:28.448 [2024-07-25 04:10:43.712436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.448 [2024-07-25 04:10:43.712448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.448 [2024-07-25 04:10:43.712455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.712462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6700) on tqpair=0xd87630 00:27:28.448 [2024-07-25 04:10:43.712477] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.712486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd87630) 00:27:28.448 [2024-07-25 04:10:43.712496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-25 04:10:43.712516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6700, cid 5, qid 0 00:27:28.448 [2024-07-25 04:10:43.712633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.448 [2024-07-25 04:10:43.712645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.448 [2024-07-25 04:10:43.712652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.712659] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6700) on tqpair=0xd87630 00:27:28.448 [2024-07-25 04:10:43.712674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.712683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd87630) 00:27:28.448 [2024-07-25 04:10:43.712693] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-25 04:10:43.712718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6700, cid 5, qid 0 00:27:28.448 [2024-07-25 04:10:43.712836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.448 [2024-07-25 04:10:43.712847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.448 [2024-07-25 04:10:43.712854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.712861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6700) on tqpair=0xd87630 00:27:28.448 [2024-07-25 04:10:43.712883] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.712894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd87630) 00:27:28.448 [2024-07-25 04:10:43.712904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-25 04:10:43.712916] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.712924] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd87630) 00:27:28.448 [2024-07-25 04:10:43.712934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-25 04:10:43.712945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.712952] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xd87630) 00:27:28.448 [2024-07-25 04:10:43.712962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:28.448 [2024-07-25 04:10:43.712974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.448 [2024-07-25 04:10:43.712981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd87630) 00:27:28.448 [2024-07-25 04:10:43.712991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-25 04:10:43.713027] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6700, cid 5, qid 0 00:27:28.448 [2024-07-25 04:10:43.713038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6580, cid 4, qid 0 00:27:28.448 [2024-07-25 04:10:43.713046] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6880, cid 6, qid 0 00:27:28.448 [2024-07-25 04:10:43.713054] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6a00, cid 7, qid 0 00:27:28.448 [2024-07-25 04:10:43.713331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:28.448 [2024-07-25 04:10:43.713345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:28.448 [2024-07-25 04:10:43.713352] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713359] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd87630): datao=0, datal=8192, cccid=5 00:27:28.449 [2024-07-25 04:10:43.713366] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd6700) on tqpair(0xd87630): expected_datao=0, payload_size=8192 00:27:28.449 [2024-07-25 04:10:43.713374] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713436] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713446] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713455] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:28.449 [2024-07-25 04:10:43.713464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:28.449 [2024-07-25 04:10:43.713471] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713477] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd87630): datao=0, datal=512, cccid=4 00:27:28.449 [2024-07-25 04:10:43.713485] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd6580) on tqpair(0xd87630): expected_datao=0, payload_size=512 00:27:28.449 [2024-07-25 04:10:43.713496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713506] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713514] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713522] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:28.449 [2024-07-25 04:10:43.713531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:28.449 [2024-07-25 04:10:43.713538] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713544] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd87630): datao=0, datal=512, cccid=6 00:27:28.449 [2024-07-25 04:10:43.713552] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd6880) on tqpair(0xd87630): expected_datao=0, payload_size=512 00:27:28.449 [2024-07-25 04:10:43.713559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713569] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713576] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:28.449 
[2024-07-25 04:10:43.713593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:28.449 [2024-07-25 04:10:43.713600] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713606] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd87630): datao=0, datal=4096, cccid=7 00:27:28.449 [2024-07-25 04:10:43.713614] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd6a00) on tqpair(0xd87630): expected_datao=0, payload_size=4096 00:27:28.449 [2024-07-25 04:10:43.713621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713630] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713638] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713650] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.449 [2024-07-25 04:10:43.713659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.449 [2024-07-25 04:10:43.713666] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713672] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6700) on tqpair=0xd87630 00:27:28.449 [2024-07-25 04:10:43.713690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.449 [2024-07-25 04:10:43.713701] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.449 [2024-07-25 04:10:43.713708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6580) on tqpair=0xd87630 00:27:28.449 [2024-07-25 04:10:43.713745] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.449 [2024-07-25 04:10:43.713756] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.449 
[2024-07-25 04:10:43.713763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6880) on tqpair=0xd87630 00:27:28.449 [2024-07-25 04:10:43.713779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.449 [2024-07-25 04:10:43.713789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.449 [2024-07-25 04:10:43.713810] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.449 [2024-07-25 04:10:43.713817] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6a00) on tqpair=0xd87630 00:27:28.449 ===================================================== 00:27:28.449 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:28.449 ===================================================== 00:27:28.449 Controller Capabilities/Features 00:27:28.449 ================================ 00:27:28.449 Vendor ID: 8086 00:27:28.449 Subsystem Vendor ID: 8086 00:27:28.449 Serial Number: SPDK00000000000001 00:27:28.449 Model Number: SPDK bdev Controller 00:27:28.449 Firmware Version: 24.09 00:27:28.449 Recommended Arb Burst: 6 00:27:28.449 IEEE OUI Identifier: e4 d2 5c 00:27:28.449 Multi-path I/O 00:27:28.449 May have multiple subsystem ports: Yes 00:27:28.449 May have multiple controllers: Yes 00:27:28.449 Associated with SR-IOV VF: No 00:27:28.449 Max Data Transfer Size: 131072 00:27:28.449 Max Number of Namespaces: 32 00:27:28.449 Max Number of I/O Queues: 127 00:27:28.449 NVMe Specification Version (VS): 1.3 00:27:28.449 NVMe Specification Version (Identify): 1.3 00:27:28.449 Maximum Queue Entries: 128 00:27:28.449 Contiguous Queues Required: Yes 00:27:28.449 Arbitration Mechanisms Supported 00:27:28.449 Weighted Round Robin: Not Supported 00:27:28.449 Vendor Specific: Not Supported 00:27:28.449 Reset Timeout: 15000 ms 00:27:28.449 Doorbell Stride: 4 bytes 00:27:28.449 
NVM Subsystem Reset: Not Supported 00:27:28.449 Command Sets Supported 00:27:28.449 NVM Command Set: Supported 00:27:28.449 Boot Partition: Not Supported 00:27:28.449 Memory Page Size Minimum: 4096 bytes 00:27:28.449 Memory Page Size Maximum: 4096 bytes 00:27:28.449 Persistent Memory Region: Not Supported 00:27:28.449 Optional Asynchronous Events Supported 00:27:28.449 Namespace Attribute Notices: Supported 00:27:28.449 Firmware Activation Notices: Not Supported 00:27:28.449 ANA Change Notices: Not Supported 00:27:28.449 PLE Aggregate Log Change Notices: Not Supported 00:27:28.449 LBA Status Info Alert Notices: Not Supported 00:27:28.449 EGE Aggregate Log Change Notices: Not Supported 00:27:28.449 Normal NVM Subsystem Shutdown event: Not Supported 00:27:28.449 Zone Descriptor Change Notices: Not Supported 00:27:28.449 Discovery Log Change Notices: Not Supported 00:27:28.449 Controller Attributes 00:27:28.449 128-bit Host Identifier: Supported 00:27:28.449 Non-Operational Permissive Mode: Not Supported 00:27:28.449 NVM Sets: Not Supported 00:27:28.449 Read Recovery Levels: Not Supported 00:27:28.449 Endurance Groups: Not Supported 00:27:28.449 Predictable Latency Mode: Not Supported 00:27:28.449 Traffic Based Keep ALive: Not Supported 00:27:28.449 Namespace Granularity: Not Supported 00:27:28.449 SQ Associations: Not Supported 00:27:28.449 UUID List: Not Supported 00:27:28.449 Multi-Domain Subsystem: Not Supported 00:27:28.449 Fixed Capacity Management: Not Supported 00:27:28.449 Variable Capacity Management: Not Supported 00:27:28.449 Delete Endurance Group: Not Supported 00:27:28.449 Delete NVM Set: Not Supported 00:27:28.449 Extended LBA Formats Supported: Not Supported 00:27:28.449 Flexible Data Placement Supported: Not Supported 00:27:28.449 00:27:28.449 Controller Memory Buffer Support 00:27:28.449 ================================ 00:27:28.449 Supported: No 00:27:28.449 00:27:28.449 Persistent Memory Region Support 00:27:28.449 ================================ 
00:27:28.449 Supported: No 00:27:28.449 00:27:28.449 Admin Command Set Attributes 00:27:28.449 ============================ 00:27:28.449 Security Send/Receive: Not Supported 00:27:28.449 Format NVM: Not Supported 00:27:28.449 Firmware Activate/Download: Not Supported 00:27:28.449 Namespace Management: Not Supported 00:27:28.449 Device Self-Test: Not Supported 00:27:28.449 Directives: Not Supported 00:27:28.449 NVMe-MI: Not Supported 00:27:28.449 Virtualization Management: Not Supported 00:27:28.449 Doorbell Buffer Config: Not Supported 00:27:28.449 Get LBA Status Capability: Not Supported 00:27:28.449 Command & Feature Lockdown Capability: Not Supported 00:27:28.449 Abort Command Limit: 4 00:27:28.449 Async Event Request Limit: 4 00:27:28.449 Number of Firmware Slots: N/A 00:27:28.449 Firmware Slot 1 Read-Only: N/A 00:27:28.449 Firmware Activation Without Reset: N/A 00:27:28.449 Multiple Update Detection Support: N/A 00:27:28.449 Firmware Update Granularity: No Information Provided 00:27:28.449 Per-Namespace SMART Log: No 00:27:28.449 Asymmetric Namespace Access Log Page: Not Supported 00:27:28.449 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:28.449 Command Effects Log Page: Supported 00:27:28.449 Get Log Page Extended Data: Supported 00:27:28.449 Telemetry Log Pages: Not Supported 00:27:28.449 Persistent Event Log Pages: Not Supported 00:27:28.449 Supported Log Pages Log Page: May Support 00:27:28.449 Commands Supported & Effects Log Page: Not Supported 00:27:28.449 Feature Identifiers & Effects Log Page:May Support 00:27:28.449 NVMe-MI Commands & Effects Log Page: May Support 00:27:28.449 Data Area 4 for Telemetry Log: Not Supported 00:27:28.449 Error Log Page Entries Supported: 128 00:27:28.449 Keep Alive: Supported 00:27:28.449 Keep Alive Granularity: 10000 ms 00:27:28.449 00:27:28.449 NVM Command Set Attributes 00:27:28.449 ========================== 00:27:28.449 Submission Queue Entry Size 00:27:28.449 Max: 64 00:27:28.449 Min: 64 00:27:28.449 Completion 
Queue Entry Size 00:27:28.450 Max: 16 00:27:28.450 Min: 16 00:27:28.450 Number of Namespaces: 32 00:27:28.450 Compare Command: Supported 00:27:28.450 Write Uncorrectable Command: Not Supported 00:27:28.450 Dataset Management Command: Supported 00:27:28.450 Write Zeroes Command: Supported 00:27:28.450 Set Features Save Field: Not Supported 00:27:28.450 Reservations: Supported 00:27:28.450 Timestamp: Not Supported 00:27:28.450 Copy: Supported 00:27:28.450 Volatile Write Cache: Present 00:27:28.450 Atomic Write Unit (Normal): 1 00:27:28.450 Atomic Write Unit (PFail): 1 00:27:28.450 Atomic Compare & Write Unit: 1 00:27:28.450 Fused Compare & Write: Supported 00:27:28.450 Scatter-Gather List 00:27:28.450 SGL Command Set: Supported 00:27:28.450 SGL Keyed: Supported 00:27:28.450 SGL Bit Bucket Descriptor: Not Supported 00:27:28.450 SGL Metadata Pointer: Not Supported 00:27:28.450 Oversized SGL: Not Supported 00:27:28.450 SGL Metadata Address: Not Supported 00:27:28.450 SGL Offset: Supported 00:27:28.450 Transport SGL Data Block: Not Supported 00:27:28.450 Replay Protected Memory Block: Not Supported 00:27:28.450 00:27:28.450 Firmware Slot Information 00:27:28.450 ========================= 00:27:28.450 Active slot: 1 00:27:28.450 Slot 1 Firmware Revision: 24.09 00:27:28.450 00:27:28.450 00:27:28.450 Commands Supported and Effects 00:27:28.450 ============================== 00:27:28.450 Admin Commands 00:27:28.450 -------------- 00:27:28.450 Get Log Page (02h): Supported 00:27:28.450 Identify (06h): Supported 00:27:28.450 Abort (08h): Supported 00:27:28.450 Set Features (09h): Supported 00:27:28.450 Get Features (0Ah): Supported 00:27:28.450 Asynchronous Event Request (0Ch): Supported 00:27:28.450 Keep Alive (18h): Supported 00:27:28.450 I/O Commands 00:27:28.450 ------------ 00:27:28.450 Flush (00h): Supported LBA-Change 00:27:28.450 Write (01h): Supported LBA-Change 00:27:28.450 Read (02h): Supported 00:27:28.450 Compare (05h): Supported 00:27:28.450 Write Zeroes (08h): 
Supported LBA-Change 00:27:28.450 Dataset Management (09h): Supported LBA-Change 00:27:28.450 Copy (19h): Supported LBA-Change 00:27:28.450 00:27:28.450 Error Log 00:27:28.450 ========= 00:27:28.450 00:27:28.450 Arbitration 00:27:28.450 =========== 00:27:28.450 Arbitration Burst: 1 00:27:28.450 00:27:28.450 Power Management 00:27:28.450 ================ 00:27:28.450 Number of Power States: 1 00:27:28.450 Current Power State: Power State #0 00:27:28.450 Power State #0: 00:27:28.450 Max Power: 0.00 W 00:27:28.450 Non-Operational State: Operational 00:27:28.450 Entry Latency: Not Reported 00:27:28.450 Exit Latency: Not Reported 00:27:28.450 Relative Read Throughput: 0 00:27:28.450 Relative Read Latency: 0 00:27:28.450 Relative Write Throughput: 0 00:27:28.450 Relative Write Latency: 0 00:27:28.450 Idle Power: Not Reported 00:27:28.450 Active Power: Not Reported 00:27:28.450 Non-Operational Permissive Mode: Not Supported 00:27:28.450 00:27:28.450 Health Information 00:27:28.450 ================== 00:27:28.450 Critical Warnings: 00:27:28.450 Available Spare Space: OK 00:27:28.450 Temperature: OK 00:27:28.450 Device Reliability: OK 00:27:28.450 Read Only: No 00:27:28.450 Volatile Memory Backup: OK 00:27:28.450 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:28.450 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:28.450 Available Spare: 0% 00:27:28.450 Available Spare Threshold: 0% 00:27:28.450 Life Percentage Used:[2024-07-25 04:10:43.713922] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.450 [2024-07-25 04:10:43.713934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd87630) 00:27:28.450 [2024-07-25 04:10:43.713944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-25 04:10:43.713968] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6a00, cid 7, qid 0 
00:27:28.450 [2024-07-25 04:10:43.714142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.450 [2024-07-25 04:10:43.714154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.450 [2024-07-25 04:10:43.714161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.450 [2024-07-25 04:10:43.714168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6a00) on tqpair=0xd87630 00:27:28.450 [2024-07-25 04:10:43.714210] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:28.450 [2024-07-25 04:10:43.714229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd5f80) on tqpair=0xd87630 00:27:28.450 [2024-07-25 04:10:43.714239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-25 04:10:43.714256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6100) on tqpair=0xd87630 00:27:28.450 [2024-07-25 04:10:43.714264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-25 04:10:43.714272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6280) on tqpair=0xd87630 00:27:28.450 [2024-07-25 04:10:43.714280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-25 04:10:43.714288] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6400) on tqpair=0xd87630 00:27:28.450 [2024-07-25 04:10:43.714296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-25 04:10:43.714308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.450 [2024-07-25 04:10:43.714316] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.450 [2024-07-25 04:10:43.714322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87630) 00:27:28.450 [2024-07-25 04:10:43.714333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-25 04:10:43.714355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6400, cid 3, qid 0 00:27:28.450 [2024-07-25 04:10:43.714497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.450 [2024-07-25 04:10:43.714512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.450 [2024-07-25 04:10:43.714519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.450 [2024-07-25 04:10:43.714526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6400) on tqpair=0xd87630 00:27:28.450 [2024-07-25 04:10:43.714537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.450 [2024-07-25 04:10:43.714545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.450 [2024-07-25 04:10:43.714551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87630) 00:27:28.450 [2024-07-25 04:10:43.714562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-25 04:10:43.714588] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6400, cid 3, qid 0 00:27:28.450 [2024-07-25 04:10:43.718253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.450 [2024-07-25 04:10:43.718269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.450 [2024-07-25 04:10:43.718277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.450 [2024-07-25 04:10:43.718284] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6400) on tqpair=0xd87630 00:27:28.450 [2024-07-25 04:10:43.718291] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:28.450 [2024-07-25 04:10:43.718299] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:28.450 [2024-07-25 04:10:43.718316] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:28.450 [2024-07-25 04:10:43.718329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:28.450 [2024-07-25 04:10:43.718336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87630) 00:27:28.450 [2024-07-25 04:10:43.718346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-25 04:10:43.718368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd6400, cid 3, qid 0 00:27:28.450 [2024-07-25 04:10:43.718531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:28.450 [2024-07-25 04:10:43.718543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:28.450 [2024-07-25 04:10:43.718550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:28.450 [2024-07-25 04:10:43.718557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd6400) on tqpair=0xd87630 00:27:28.450 [2024-07-25 04:10:43.718570] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:27:28.450 0% 00:27:28.450 Data Units Read: 0 00:27:28.450 Data Units Written: 0 00:27:28.450 Host Read Commands: 0 00:27:28.450 Host Write Commands: 0 00:27:28.450 Controller Busy Time: 0 minutes 00:27:28.450 Power Cycles: 0 00:27:28.450 Power On Hours: 0 hours 00:27:28.450 Unsafe Shutdowns: 0 00:27:28.450 Unrecoverable 
Media Errors: 0 00:27:28.450 Lifetime Error Log Entries: 0 00:27:28.450 Warning Temperature Time: 0 minutes 00:27:28.450 Critical Temperature Time: 0 minutes 00:27:28.450 00:27:28.450 Number of Queues 00:27:28.450 ================ 00:27:28.450 Number of I/O Submission Queues: 127 00:27:28.450 Number of I/O Completion Queues: 127 00:27:28.450 00:27:28.450 Active Namespaces 00:27:28.450 ================= 00:27:28.450 Namespace ID:1 00:27:28.450 Error Recovery Timeout: Unlimited 00:27:28.450 Command Set Identifier: NVM (00h) 00:27:28.450 Deallocate: Supported 00:27:28.450 Deallocated/Unwritten Error: Not Supported 00:27:28.450 Deallocated Read Value: Unknown 00:27:28.450 Deallocate in Write Zeroes: Not Supported 00:27:28.451 Deallocated Guard Field: 0xFFFF 00:27:28.451 Flush: Supported 00:27:28.451 Reservation: Supported 00:27:28.451 Namespace Sharing Capabilities: Multiple Controllers 00:27:28.451 Size (in LBAs): 131072 (0GiB) 00:27:28.451 Capacity (in LBAs): 131072 (0GiB) 00:27:28.451 Utilization (in LBAs): 131072 (0GiB) 00:27:28.451 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:28.451 EUI64: ABCDEF0123456789 00:27:28.451 UUID: 2747dc61-16bc-46c1-ad11-67983c6471a4 00:27:28.451 Thin Provisioning: Not Supported 00:27:28.451 Per-NS Atomic Units: Yes 00:27:28.451 Atomic Boundary Size (Normal): 0 00:27:28.451 Atomic Boundary Size (PFail): 0 00:27:28.451 Atomic Boundary Offset: 0 00:27:28.451 Maximum Single Source Range Length: 65535 00:27:28.451 Maximum Copy Length: 65535 00:27:28.451 Maximum Source Range Count: 1 00:27:28.451 NGUID/EUI64 Never Reused: No 00:27:28.451 Namespace Write Protected: No 00:27:28.451 Number of LBA Formats: 1 00:27:28.451 Current LBA Format: LBA Format #00 00:27:28.451 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:28.451 00:27:28.451 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:28.451 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:27:28.451 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.451 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:28.708 rmmod nvme_tcp 00:27:28.708 rmmod nvme_fabrics 00:27:28.708 rmmod nvme_keyring 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 921168 ']' 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 921168 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 921168 ']' 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 921168 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@955 -- # uname 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 921168 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 921168' 00:27:28.708 killing process with pid 921168 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 921168 00:27:28.708 04:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 921168 00:27:28.966 04:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:28.966 04:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:28.966 04:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:28.966 04:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:28.966 04:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:28.966 04:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.966 04:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:28.966 04:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.866 04:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:30.866 00:27:30.866 real 0m5.287s 00:27:30.866 user 0m3.958s 00:27:30.866 sys 0m1.840s 00:27:30.866 
04:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:30.866 04:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:30.866 ************************************ 00:27:30.866 END TEST nvmf_identify 00:27:30.866 ************************************ 00:27:30.866 04:10:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:30.866 04:10:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:30.866 04:10:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:30.866 04:10:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.125 ************************************ 00:27:31.125 START TEST nvmf_perf 00:27:31.125 ************************************ 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:31.125 * Looking for test storage... 
00:27:31.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.125 04:10:46 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.125 04:10:46 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:31.125 04:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.025 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:33.026 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.026 04:10:48 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:33.026 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:27:33.026 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:33.026 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:33.026 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:27:33.026 00:27:33.026 --- 10.0.0.2 ping statistics --- 00:27:33.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.026 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:33.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:27:33.026 00:27:33.026 --- 10.0.0.1 ping statistics --- 00:27:33.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.026 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 
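As a side note on reading logs like the two ping blocks above: the RTT summary line emitted by ping follows a fixed `min/avg/max/mdev` layout, so its values can be extracted mechanically. The helper below is purely illustrative (it is not part of the SPDK test scripts) and assumes the iputils-style summary format shown in this log.

```python
import re

def parse_ping_rtt(line: str) -> dict:
    """Extract RTT statistics (in ms) from a ping summary line such as
    'rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms'."""
    m = re.search(r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", line)
    if not m:
        raise ValueError("no RTT summary found in line")
    keys = ("min", "avg", "max", "mdev")
    return dict(zip(keys, (float(g) for g in m.groups())))

# Summary line copied from the first ping block in the log above.
stats = parse_ping_rtt("rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms")
print(stats["avg"])  # 0.215
```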
00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=923255 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 923255 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 923255 ']' 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:33.026 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:33.285 [2024-07-25 04:10:48.354595] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:27:33.285 [2024-07-25 04:10:48.354683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.285 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.285 [2024-07-25 04:10:48.397356] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:33.285 [2024-07-25 04:10:48.428310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:33.285 [2024-07-25 04:10:48.524259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.285 [2024-07-25 04:10:48.524320] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.285 [2024-07-25 04:10:48.524335] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.285 [2024-07-25 04:10:48.524347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.285 [2024-07-25 04:10:48.524357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.285 [2024-07-25 04:10:48.524416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.285 [2024-07-25 04:10:48.524442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.285 [2024-07-25 04:10:48.524501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.285 [2024-07-25 04:10:48.524504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.543 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:33.543 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:27:33.543 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:33.543 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:33.543 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:33.543 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.543 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:33.543 04:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:36.820 04:10:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:36.820 04:10:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:36.820 04:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:27:36.820 04:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:37.077 04:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:37.077 04:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:27:37.077 04:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:37.077 04:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:37.077 04:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:37.335 [2024-07-25 04:10:52.525506] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.335 04:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:37.592 04:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:37.592 04:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:37.849 04:10:53 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:37.850 04:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:38.416 04:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:38.416 [2024-07-25 04:10:53.645609] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.416 04:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:38.672 04:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:27:38.672 04:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:38.672 04:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:38.672 04:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:40.043 Initializing NVMe Controllers 00:27:40.043 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:27:40.043 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:27:40.043 Initialization complete. Launching workers. 
00:27:40.043 ========================================================
00:27:40.043 Latency(us)
00:27:40.043 Device Information : IOPS MiB/s Average min max
00:27:40.043 PCIE (0000:88:00.0) NSID 1 from core 0: 85969.50 335.82 371.77 42.48 8255.25
00:27:40.043 ========================================================
00:27:40.043 Total : 85969.50 335.82 371.77 42.48 8255.25
00:27:40.043
00:27:40.043 04:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:40.043 EAL: No free 2048 kB hugepages reported on node 1
00:27:41.412 Initializing NVMe Controllers
00:27:41.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:41.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:41.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:41.412 Initialization complete. Launching workers.
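In these spdk_nvme_perf tables the MiB/s column is simply IOPS times the IO size (here `-o 4096`) divided by 1 MiB. A quick sanity check of the PCIe row above, with the numbers copied from the log:

```shell
# MiB/s = IOPS * io_size_bytes / (1024*1024); values from the 0000:88:00.0 row
awk 'BEGIN { printf "%.2f MiB/s\n", 85969.50 * 4096 / 1048576 }'
# prints "335.82 MiB/s", matching the table
```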
00:27:41.412 ========================================================
00:27:41.412 Latency(us)
00:27:41.412 Device Information : IOPS MiB/s Average min max
00:27:41.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 108.94 0.43 9325.23 185.01 45977.09
00:27:41.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 49.97 0.20 20807.88 7950.72 50857.26
00:27:41.412 ========================================================
00:27:41.412 Total : 158.91 0.62 12936.13 185.01 50857.26
00:27:41.412
00:27:41.412 04:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:41.412 EAL: No free 2048 kB hugepages reported on node 1
00:27:42.784 Initializing NVMe Controllers
00:27:42.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:42.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:42.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:42.784 Initialization complete. Launching workers.
00:27:42.784 ========================================================
00:27:42.784 Latency(us)
00:27:42.784 Device Information : IOPS MiB/s Average min max
00:27:42.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8426.78 32.92 3798.37 688.40 7424.05
00:27:42.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3866.89 15.11 8307.78 5531.27 15796.71
00:27:42.784 ========================================================
00:27:42.784 Total : 12293.67 48.02 5216.77 688.40 15796.71
00:27:42.784
00:27:42.784 04:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:27:42.784 04:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:27:42.784 04:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:42.784 EAL: No free 2048 kB hugepages reported on node 1
00:27:45.312 Initializing NVMe Controllers
00:27:45.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:45.312 Controller IO queue size 128, less than required.
00:27:45.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:45.312 Controller IO queue size 128, less than required.
00:27:45.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:45.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:45.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:45.312 Initialization complete. Launching workers.
00:27:45.312 ========================================================
00:27:45.312 Latency(us)
00:27:45.312 Device Information : IOPS MiB/s Average min max
00:27:45.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1274.48 318.62 102540.79 59185.98 150516.48
00:27:45.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 619.99 155.00 216438.12 84306.95 348667.64
00:27:45.313 ========================================================
00:27:45.313 Total : 1894.47 473.62 139815.19 59185.98 348667.64
00:27:45.313
00:27:45.313 04:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:27:45.313 EAL: No free 2048 kB hugepages reported on node 1
00:27:45.313 No valid NVMe controllers or AIO or URING devices found
00:27:45.313 Initializing NVMe Controllers
00:27:45.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:45.313 Controller IO queue size 128, less than required.
00:27:45.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:45.313 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:27:45.313 Controller IO queue size 128, less than required.
00:27:45.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:45.313 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512.
Removing this ns from test 00:27:45.313 WARNING: Some requested NVMe devices were skipped 00:27:45.313 04:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:45.313 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.840 Initializing NVMe Controllers 00:27:47.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:47.840 Controller IO queue size 128, less than required. 00:27:47.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:47.840 Controller IO queue size 128, less than required. 00:27:47.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:47.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:47.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:47.840 Initialization complete. Launching workers. 
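The two WARNING lines above fire because spdk_nvme_perf requires the `-o` IO size to be a multiple of the namespace sector size; with `-o 36964` and 512-byte sectors the remainder is non-zero, so both namespaces are dropped from that run. A minimal check of the arithmetic:

```shell
# 36964 is not a multiple of the 512-byte sector size, so both namespaces
# are removed from the -o 36964 run (the remainder is non-zero)
io_size=36964
sector_size=512
echo $(( io_size % sector_size ))
# prints 100 -> not aligned, ns removed from test
```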
00:27:47.840
00:27:47.840 ====================
00:27:47.840 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:27:47.840 TCP transport:
00:27:47.840 polls: 19569
00:27:47.840 idle_polls: 10107
00:27:47.840 sock_completions: 9462
00:27:47.840 nvme_completions: 4867
00:27:47.840 submitted_requests: 7338
00:27:47.840 queued_requests: 1
00:27:47.840
00:27:47.840 ====================
00:27:47.840 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:27:47.840 TCP transport:
00:27:47.840 polls: 16187
00:27:47.840 idle_polls: 6661
00:27:47.840 sock_completions: 9526
00:27:47.840 nvme_completions: 5299
00:27:47.840 submitted_requests: 7948
00:27:47.840 queued_requests: 1
00:27:47.840 ========================================================
00:27:47.840 Latency(us)
00:27:47.840 Device Information : IOPS MiB/s Average min max
00:27:47.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1216.37 304.09 108681.77 71718.10 194230.90
00:27:47.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1324.36 331.09 97395.19 36015.53 158822.48
00:27:47.840 ========================================================
00:27:47.840 Total : 2540.73 635.18 102798.62 36015.53 194230.90
00:27:47.840
00:27:47.840 04:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:27:48.098 04:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:48.098 04:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:27:48.098 04:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']'
00:27:48.098 04:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf --
host/perf.sh@72 -- # ls_guid=d9be85fe-9351-4278-86e3-23c5d79a405f 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb d9be85fe-9351-4278-86e3-23c5d79a405f 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=d9be85fe-9351-4278-86e3-23c5d79a405f 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:52.277 { 00:27:52.277 "uuid": "d9be85fe-9351-4278-86e3-23c5d79a405f", 00:27:52.277 "name": "lvs_0", 00:27:52.277 "base_bdev": "Nvme0n1", 00:27:52.277 "total_data_clusters": 238234, 00:27:52.277 "free_clusters": 238234, 00:27:52.277 "block_size": 512, 00:27:52.277 "cluster_size": 4194304 00:27:52.277 } 00:27:52.277 ]' 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d9be85fe-9351-4278-86e3-23c5d79a405f") .free_clusters' 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d9be85fe-9351-4278-86e3-23c5d79a405f") .cluster_size' 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 
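get_lvs_free_mb, traced above via autotest_common.sh, converts the lvstore's free_clusters and cluster_size (read with jq from bdev_lvol_get_lvstores) into MiB. A standalone sketch of that arithmetic with the lvs_0 values copied from the log:

```shell
# free MiB = free_clusters * cluster_size_bytes / 1 MiB
fc=238234    # free_clusters reported for lvs_0
cs=4194304   # cluster_size in bytes (4 MiB)
echo $(( fc * cs / 1024 / 1024 ))
# prints 952936, the value the helper echoes in the log
```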
00:27:52.277 952936 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:27:52.277 04:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d9be85fe-9351-4278-86e3-23c5d79a405f lbd_0 20480 00:27:52.277 04:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=02d7ca73-f8ac-4f49-a4f5-bad9152e73b1 00:27:52.277 04:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 02d7ca73-f8ac-4f49-a4f5-bad9152e73b1 lvs_n_0 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=c077ae0f-0256-4182-b04c-0c39a546b411 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb c077ae0f-0256-4182-b04c-0c39a546b411 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=c077ae0f-0256-4182-b04c-0c39a546b411 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:53.209 { 00:27:53.209 "uuid": "d9be85fe-9351-4278-86e3-23c5d79a405f", 00:27:53.209 "name": "lvs_0", 00:27:53.209 "base_bdev": "Nvme0n1", 00:27:53.209 "total_data_clusters": 238234, 00:27:53.209 "free_clusters": 233114, 00:27:53.209 "block_size": 512, 00:27:53.209 
"cluster_size": 4194304 00:27:53.209 }, 00:27:53.209 { 00:27:53.209 "uuid": "c077ae0f-0256-4182-b04c-0c39a546b411", 00:27:53.209 "name": "lvs_n_0", 00:27:53.209 "base_bdev": "02d7ca73-f8ac-4f49-a4f5-bad9152e73b1", 00:27:53.209 "total_data_clusters": 5114, 00:27:53.209 "free_clusters": 5114, 00:27:53.209 "block_size": 512, 00:27:53.209 "cluster_size": 4194304 00:27:53.209 } 00:27:53.209 ]' 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c077ae0f-0256-4182-b04c-0c39a546b411") .free_clusters' 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c077ae0f-0256-4182-b04c-0c39a546b411") .cluster_size' 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:27:53.209 20456 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:53.209 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c077ae0f-0256-4182-b04c-0c39a546b411 lbd_nest_0 20456 00:27:53.467 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=b68d06c7-bce9-41e1-9975-90cacab2e4e5 00:27:53.467 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:53.724 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:53.724 04:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b68d06c7-bce9-41e1-9975-90cacab2e4e5 00:27:53.981 04:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.238 04:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:54.238 04:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:54.238 04:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:54.238 04:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:54.238 04:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:54.238 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.468 Initializing NVMe Controllers 00:28:06.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:06.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:06.468 Initialization complete. Launching workers. 
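The qd_depth and io_size arrays set at perf.sh@95-96 drive a 3x2 sweep: one spdk_nvme_perf run per (queue depth, IO size) pair, and the six latency tables that follow correspond to those pairs. A sketch of the loop as an illustration, not the test script itself; the commands are collected and printed rather than executed, since the perf binary only exists on the test host:

```shell
# Sweep sketch: flags copied from the perf.sh@99 invocations in the log
qd_depth=("1" "32" "128")
io_size=("512" "131072")
cmds=()
for qd in "${qd_depth[@]}"; do
  for o in "${io_size[@]}"; do
    cmds+=("spdk_nvme_perf -q $qd -o $o -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'")
  done
done
printf '%s\n' "${cmds[@]}"   # 6 commands, matching the 6 result tables
```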
00:28:06.468 ========================================================
00:28:06.468 Latency(us)
00:28:06.468 Device Information : IOPS MiB/s Average min max
00:28:06.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.29 0.02 22137.62 203.91 46763.95
00:28:06.468 ========================================================
00:28:06.468 Total : 45.29 0.02 22137.62 203.91 46763.95
00:28:06.468
00:28:06.468 04:11:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:06.468 04:11:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:06.468 EAL: No free 2048 kB hugepages reported on node 1
00:28:16.422 Initializing NVMe Controllers
00:28:16.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:16.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:16.422 Initialization complete. Launching workers.
00:28:16.422 ========================================================
00:28:16.422 Latency(us)
00:28:16.422 Device Information : IOPS MiB/s Average min max
00:28:16.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.69 10.21 12241.16 5023.39 47898.05
00:28:16.422 ========================================================
00:28:16.422 Total : 81.69 10.21 12241.16 5023.39 47898.05
00:28:16.422
00:28:16.422 04:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:28:16.422 04:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:16.422 04:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:16.422 EAL: No free 2048 kB hugepages reported on node 1
00:28:26.436 Initializing NVMe Controllers
00:28:26.437 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:26.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:26.437 Initialization complete. Launching workers.
00:28:26.437 ========================================================
00:28:26.437 Latency(us)
00:28:26.437 Device Information : IOPS MiB/s Average min max
00:28:26.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7321.30 3.57 4372.23 289.85 11170.10
00:28:26.437 ========================================================
00:28:26.437 Total : 7321.30 3.57 4372.23 289.85 11170.10
00:28:26.437
00:28:26.437 04:11:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:26.437 04:11:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:26.437 EAL: No free 2048 kB hugepages reported on node 1
00:28:36.396 Initializing NVMe Controllers
00:28:36.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:36.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:36.396 Initialization complete. Launching workers.
00:28:36.396 ========================================================
00:28:36.396 Latency(us)
00:28:36.396 Device Information : IOPS MiB/s Average min max
00:28:36.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2431.60 303.95 13169.74 668.72 30297.53
00:28:36.396 ========================================================
00:28:36.396 Total : 2431.60 303.95 13169.74 668.72 30297.53
00:28:36.397
00:28:36.397 04:11:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:28:36.397 04:11:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:36.397 04:11:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:36.397 EAL: No free 2048 kB hugepages reported on node 1
00:28:46.357 Initializing NVMe Controllers
00:28:46.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:46.357 Controller IO queue size 128, less than required.
00:28:46.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:46.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:46.357 Initialization complete. Launching workers.
00:28:46.357 ========================================================
00:28:46.357 Latency(us)
00:28:46.357 Device Information : IOPS MiB/s Average min max
00:28:46.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11845.12 5.78 10808.36 1736.63 25966.15
00:28:46.357 ========================================================
00:28:46.357 Total : 11845.12 5.78 10808.36 1736.63 25966.15
00:28:46.357
00:28:46.357 04:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:46.357 04:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:46.357 EAL: No free 2048 kB hugepages reported on node 1
00:28:58.544 Initializing NVMe Controllers
00:28:58.544 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:58.544 Controller IO queue size 128, less than required.
00:28:58.544 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:58.544 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:58.544 Initialization complete. Launching workers.
00:28:58.544 ========================================================
00:28:58.544 Latency(us)
00:28:58.544 Device Information : IOPS MiB/s Average min max
00:28:58.544 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1201.19 150.15 107014.79 15604.12 214854.21
00:28:58.544 ========================================================
00:28:58.544 Total : 1201.19 150.15 107014.79 15604.12 214854.21
00:28:58.544
00:28:58.544 04:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:58.544 04:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b68d06c7-bce9-41e1-9975-90cacab2e4e5
00:28:58.544 04:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:28:58.544 04:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 02d7ca73-f8ac-4f49-a4f5-bad9152e73b1
00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i
in {1..20} 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:58.544 rmmod nvme_tcp 00:28:58.544 rmmod nvme_fabrics 00:28:58.544 rmmod nvme_keyring 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 923255 ']' 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 923255 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 923255 ']' 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 923255 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 923255 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 923255' 00:28:58.544 killing process with pid 923255 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 923255 00:28:58.544 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 923255 00:28:59.919 04:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:59.919 04:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:28:59.919 04:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:59.919 04:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:59.919 04:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:59.919 04:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.919 04:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.919 04:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:02.455 00:29:02.455 real 1m31.026s 00:29:02.455 user 5m34.334s 00:29:02.455 sys 0m15.925s 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:02.455 ************************************ 00:29:02.455 END TEST nvmf_perf 00:29:02.455 ************************************ 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.455 ************************************ 00:29:02.455 START TEST nvmf_fio_host 00:29:02.455 ************************************ 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:02.455 * Looking 
for test storage... 00:29:02.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.455 04:12:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.455 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:02.456 04:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.358 04:12:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.358 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:04.358 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:04.358 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:04.358 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:04.358 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:04.358 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:04.358 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:04.358 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:04.358 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.359 04:12:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:04.359 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:04.359 04:12:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:04.359 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:04.359 04:12:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:04.359 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:04.359 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:04.359 04:12:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:04.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:29:04.359 00:29:04.359 --- 10.0.0.2 ping statistics --- 00:29:04.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.359 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:04.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:29:04.359 00:29:04.359 --- 10.0.0.1 ping statistics --- 00:29:04.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.359 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.359 04:12:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=935828 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 935828 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 935828 ']' 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.359 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:04.360 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:04.360 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:04.360 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.360 [2024-07-25 04:12:19.483431] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:29:04.360 [2024-07-25 04:12:19.483507] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.360 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.360 [2024-07-25 04:12:19.524010] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:04.360 [2024-07-25 04:12:19.554465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.360 [2024-07-25 04:12:19.651296] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.360 [2024-07-25 04:12:19.651352] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.360 [2024-07-25 04:12:19.651379] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.360 [2024-07-25 04:12:19.651392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.360 [2024-07-25 04:12:19.651403] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:04.360 [2024-07-25 04:12:19.651468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.360 [2024-07-25 04:12:19.651523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.360 [2024-07-25 04:12:19.651568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:04.360 [2024-07-25 04:12:19.651573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.617 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:04.617 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:29:04.617 04:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:04.875 [2024-07-25 04:12:20.019507] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.875 04:12:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:04.875 04:12:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:04.875 04:12:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.875 04:12:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:05.133 Malloc1 00:29:05.133 04:12:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:05.402 04:12:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:05.684 04:12:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.940 [2024-07-25 04:12:21.171731] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.940 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:06.198 04:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:06.455 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:06.455 fio-3.35 
00:29:06.455 Starting 1 thread 00:29:06.455 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.977 00:29:08.977 test: (groupid=0, jobs=1): err= 0: pid=936191: Thu Jul 25 04:12:24 2024 00:29:08.977 read: IOPS=8964, BW=35.0MiB/s (36.7MB/s)(70.3MiB/2007msec) 00:29:08.977 slat (usec): min=2, max=143, avg= 2.59, stdev= 1.80 00:29:08.977 clat (usec): min=2498, max=13044, avg=7882.39, stdev=616.62 00:29:08.977 lat (usec): min=2528, max=13046, avg=7884.98, stdev=616.49 00:29:08.977 clat percentiles (usec): 00:29:08.977 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:29:08.977 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:29:08.977 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:29:08.977 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11207], 99.95th=[12387], 00:29:08.977 | 99.99th=[13042] 00:29:08.977 bw ( KiB/s): min=34744, max=36456, per=99.98%, avg=35850.00, stdev=756.54, samples=4 00:29:08.977 iops : min= 8686, max= 9114, avg=8962.50, stdev=189.13, samples=4 00:29:08.977 write: IOPS=8981, BW=35.1MiB/s (36.8MB/s)(70.4MiB/2007msec); 0 zone resets 00:29:08.977 slat (usec): min=2, max=140, avg= 2.71, stdev= 1.36 00:29:08.977 clat (usec): min=1421, max=12670, avg=6345.15, stdev=552.52 00:29:08.977 lat (usec): min=1429, max=12673, avg=6347.86, stdev=552.45 00:29:08.977 clat percentiles (usec): 00:29:08.977 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5932], 00:29:08.977 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6325], 60.00th=[ 6456], 00:29:08.977 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6980], 95.00th=[ 7111], 00:29:08.977 | 99.00th=[ 7504], 99.50th=[ 7635], 99.90th=[11338], 99.95th=[11600], 00:29:08.977 | 99.99th=[12518] 00:29:08.977 bw ( KiB/s): min=35392, max=36496, per=100.00%, avg=35928.00, stdev=494.45, samples=4 00:29:08.977 iops : min= 8848, max= 9124, avg=8982.00, stdev=123.61, samples=4 00:29:08.977 lat (msec) : 2=0.03%, 4=0.11%, 10=99.69%, 20=0.17% 
00:29:08.977 cpu : usr=57.88%, sys=37.69%, ctx=72, majf=0, minf=38 00:29:08.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:08.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:08.977 issued rwts: total=17991,18025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:08.977 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:08.977 00:29:08.977 Run status group 0 (all jobs): 00:29:08.977 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.3MiB (73.7MB), run=2007-2007msec 00:29:08.977 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.4MiB (73.8MB), run=2007-2007msec 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 
00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:08.977 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:29:09.234 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:09.234 fio-3.35 00:29:09.234 Starting 1 thread 00:29:09.234 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.759 00:29:11.759 test: (groupid=0, jobs=1): err= 0: pid=936640: Thu Jul 25 04:12:26 2024 00:29:11.759 read: IOPS=8466, BW=132MiB/s (139MB/s)(265MiB/2002msec) 00:29:11.759 slat (nsec): min=2843, max=97166, avg=3748.95, stdev=1646.30 00:29:11.759 clat (usec): min=2301, max=16969, avg=8828.83, stdev=2089.14 00:29:11.759 lat (usec): min=2305, max=16973, avg=8832.58, stdev=2089.22 00:29:11.759 clat percentiles (usec): 00:29:11.759 | 1.00th=[ 4621], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 7046], 00:29:11.759 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9372], 00:29:11.759 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11469], 95.00th=[12256], 00:29:11.759 | 99.00th=[14222], 99.50th=[15008], 99.90th=[16909], 99.95th=[16909], 00:29:11.759 | 99.99th=[16909] 00:29:11.759 bw ( KiB/s): min=56832, max=77056, per=51.69%, avg=70016.00, stdev=9288.75, samples=4 00:29:11.759 iops : min= 3552, max= 4816, avg=4376.00, stdev=580.55, samples=4 00:29:11.759 write: IOPS=4890, BW=76.4MiB/s (80.1MB/s)(142MiB/1863msec); 0 zone resets 00:29:11.759 slat (usec): min=30, max=202, avg=34.20, stdev= 6.20 00:29:11.759 clat (usec): min=4653, max=19122, avg=11011.09, stdev=2021.09 00:29:11.759 lat (usec): min=4685, max=19168, avg=11045.29, stdev=2021.69 00:29:11.759 clat percentiles (usec): 00:29:11.759 | 1.00th=[ 7177], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[ 9372], 00:29:11.759 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10683], 60.00th=[11207], 00:29:11.759 | 70.00th=[11863], 80.00th=[12780], 90.00th=[14091], 95.00th=[14746], 00:29:11.759 | 99.00th=[15926], 99.50th=[16581], 99.90th=[17171], 99.95th=[17171], 00:29:11.759 | 99.99th=[19006] 00:29:11.759 bw ( KiB/s): min=58912, max=79328, per=92.81%, 
avg=72624.00, stdev=9424.08, samples=4 00:29:11.759 iops : min= 3682, max= 4958, avg=4539.00, stdev=589.00, samples=4 00:29:11.759 lat (msec) : 4=0.18%, 10=56.76%, 20=43.05% 00:29:11.759 cpu : usr=76.21%, sys=21.09%, ctx=34, majf=0, minf=56 00:29:11.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:11.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:11.759 issued rwts: total=16949,9111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:11.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:11.759 00:29:11.759 Run status group 0 (all jobs): 00:29:11.759 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=265MiB (278MB), run=2002-2002msec 00:29:11.759 WRITE: bw=76.4MiB/s (80.1MB/s), 76.4MiB/s-76.4MiB/s (80.1MB/s-80.1MB/s), io=142MiB (149MB), run=1863-1863msec 00:29:11.759 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:11.759 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:11.759 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:11.759 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:11.759 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:11.759 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:29:11.759 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:11.759 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:11.759 04:12:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:11.759 04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:11.759 04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:29:11.759 04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:29:15.040 Nvme0n1 00:29:15.040 04:12:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:18.312 04:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=21d1a1d1-98c6-4180-adb0-85a397a0f0de 00:29:18.312 04:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 21d1a1d1-98c6-4180-adb0-85a397a0f0de 00:29:18.312 04:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=21d1a1d1-98c6-4180-adb0-85a397a0f0de 00:29:18.312 04:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:18.312 04:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:18.312 04:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:18.312 04:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:18.312 04:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:18.312 { 00:29:18.312 "uuid": "21d1a1d1-98c6-4180-adb0-85a397a0f0de", 00:29:18.312 "name": "lvs_0", 00:29:18.312 "base_bdev": "Nvme0n1", 00:29:18.312 "total_data_clusters": 930, 00:29:18.312 "free_clusters": 930, 00:29:18.312 
"block_size": 512, 00:29:18.312 "cluster_size": 1073741824 00:29:18.312 } 00:29:18.312 ]' 00:29:18.312 04:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="21d1a1d1-98c6-4180-adb0-85a397a0f0de") .free_clusters' 00:29:18.312 04:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:29:18.312 04:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="21d1a1d1-98c6-4180-adb0-85a397a0f0de") .cluster_size' 00:29:18.312 04:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:29:18.312 04:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:29:18.312 04:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:29:18.312 952320 00:29:18.312 04:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:18.569 1ee715c2-6679-48ff-867d-3d3193046da9 00:29:18.569 04:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:18.826 04:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:19.083 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
--bs=4096 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:19.340 04:12:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:19.340 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:19.597 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:19.597 fio-3.35 00:29:19.597 Starting 1 thread 00:29:19.597 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.119 00:29:22.119 test: (groupid=0, jobs=1): err= 0: pid=937919: Thu Jul 25 04:12:37 2024 00:29:22.119 read: IOPS=5756, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2009msec) 00:29:22.119 slat (nsec): min=1976, max=141207, avg=2536.66, stdev=1892.04 00:29:22.119 clat (usec): min=761, max=171461, avg=12246.25, stdev=11837.27 00:29:22.119 lat (usec): min=764, max=171512, avg=12248.79, stdev=11837.53 00:29:22.119 clat percentiles (msec): 00:29:22.119 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:29:22.119 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:29:22.119 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 13], 95.00th=[ 13], 00:29:22.119 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 
171], 00:29:22.119 | 99.99th=[ 171] 00:29:22.119 bw ( KiB/s): min=16096, max=25384, per=99.82%, avg=22984.00, stdev=4593.01, samples=4 00:29:22.119 iops : min= 4024, max= 6346, avg=5746.00, stdev=1148.25, samples=4 00:29:22.119 write: IOPS=5744, BW=22.4MiB/s (23.5MB/s)(45.1MiB/2009msec); 0 zone resets 00:29:22.119 slat (usec): min=2, max=142, avg= 2.67, stdev= 1.61 00:29:22.119 clat (usec): min=331, max=169620, avg=9871.78, stdev=11132.22 00:29:22.119 lat (usec): min=334, max=169625, avg=9874.44, stdev=11132.48 00:29:22.119 clat percentiles (msec): 00:29:22.119 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:29:22.119 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 10], 00:29:22.119 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:29:22.119 | 99.00th=[ 12], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:29:22.119 | 99.99th=[ 169] 00:29:22.119 bw ( KiB/s): min=17128, max=25024, per=99.96%, avg=22970.00, stdev=3895.48, samples=4 00:29:22.119 iops : min= 4282, max= 6256, avg=5742.50, stdev=973.87, samples=4 00:29:22.119 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:22.119 lat (msec) : 2=0.03%, 4=0.12%, 10=46.86%, 20=52.42%, 250=0.55% 00:29:22.119 cpu : usr=57.77%, sys=38.84%, ctx=115, majf=0, minf=38 00:29:22.119 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:22.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:22.119 issued rwts: total=11564,11541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:22.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:22.119 00:29:22.119 Run status group 0 (all jobs): 00:29:22.120 READ: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2009-2009msec 00:29:22.120 WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.1MiB (47.3MB), run=2009-2009msec 00:29:22.120 04:12:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:22.120 04:12:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=3f3ca24c-6ab1-42b0-bc79-5d3a14923f3f 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 3f3ca24c-6ab1-42b0-bc79-5d3a14923f3f 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=3f3ca24c-6ab1-42b0-bc79-5d3a14923f3f 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:23.526 { 00:29:23.526 "uuid": "21d1a1d1-98c6-4180-adb0-85a397a0f0de", 00:29:23.526 "name": "lvs_0", 00:29:23.526 "base_bdev": "Nvme0n1", 00:29:23.526 "total_data_clusters": 930, 00:29:23.526 "free_clusters": 0, 00:29:23.526 "block_size": 512, 00:29:23.526 "cluster_size": 1073741824 00:29:23.526 }, 00:29:23.526 { 00:29:23.526 "uuid": "3f3ca24c-6ab1-42b0-bc79-5d3a14923f3f", 00:29:23.526 "name": "lvs_n_0", 00:29:23.526 "base_bdev": "1ee715c2-6679-48ff-867d-3d3193046da9", 00:29:23.526 "total_data_clusters": 237847, 00:29:23.526 "free_clusters": 237847, 00:29:23.526 "block_size": 512, 00:29:23.526 
"cluster_size": 4194304 00:29:23.526 } 00:29:23.526 ]' 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3f3ca24c-6ab1-42b0-bc79-5d3a14923f3f") .free_clusters' 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="3f3ca24c-6ab1-42b0-bc79-5d3a14923f3f") .cluster_size' 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:29:23.526 951388 00:29:23.526 04:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:24.459 86086dd9-62f7-495d-83f5-474e9f28006b 00:29:24.459 04:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:24.459 04:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:24.716 04:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:24.974 
04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:24.974 04:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:25.232 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:25.232 fio-3.35 00:29:25.232 Starting 1 thread 00:29:25.232 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.757 00:29:27.757 test: (groupid=0, jobs=1): err= 0: pid=938653: Thu Jul 25 04:12:42 2024 00:29:27.757 read: IOPS=5958, BW=23.3MiB/s (24.4MB/s)(46.7MiB/2007msec) 00:29:27.757 slat (usec): min=2, max=130, avg= 2.66, stdev= 1.79 00:29:27.757 clat (usec): min=4298, max=20010, avg=11862.73, stdev=1016.57 00:29:27.757 lat (usec): min=4314, max=20013, avg=11865.39, stdev=1016.47 00:29:27.757 clat percentiles (usec): 00:29:27.757 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:29:27.757 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:29:27.757 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[13304], 00:29:27.757 | 99.00th=[14091], 99.50th=[14353], 99.90th=[19268], 99.95th=[19530], 00:29:27.757 | 
99.99th=[20055] 00:29:27.757 bw ( KiB/s): min=22720, max=24232, per=99.70%, avg=23764.00, stdev=708.43, samples=4 00:29:27.757 iops : min= 5680, max= 6058, avg=5941.00, stdev=177.11, samples=4 00:29:27.757 write: IOPS=5949, BW=23.2MiB/s (24.4MB/s)(46.6MiB/2007msec); 0 zone resets 00:29:27.757 slat (usec): min=2, max=102, avg= 2.79, stdev= 1.43 00:29:27.757 clat (usec): min=2036, max=18047, avg=9491.11, stdev=865.15 00:29:27.757 lat (usec): min=2042, max=18050, avg=9493.90, stdev=865.12 00:29:27.757 clat percentiles (usec): 00:29:27.757 | 1.00th=[ 7570], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8848], 00:29:27.757 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:29:27.757 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:29:27.757 | 99.00th=[11338], 99.50th=[11600], 99.90th=[15270], 99.95th=[16909], 00:29:27.757 | 99.99th=[16909] 00:29:27.757 bw ( KiB/s): min=23584, max=23928, per=99.93%, avg=23782.00, stdev=143.61, samples=4 00:29:27.757 iops : min= 5896, max= 5982, avg=5945.50, stdev=35.90, samples=4 00:29:27.757 lat (msec) : 4=0.05%, 10=38.41%, 20=61.54%, 50=0.01% 00:29:27.757 cpu : usr=58.82%, sys=37.59%, ctx=109, majf=0, minf=38 00:29:27.757 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:27.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:27.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:27.757 issued rwts: total=11959,11941,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:27.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:27.757 00:29:27.757 Run status group 0 (all jobs): 00:29:27.757 READ: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.7MiB (49.0MB), run=2007-2007msec 00:29:27.757 WRITE: bw=23.2MiB/s (24.4MB/s), 23.2MiB/s-23.2MiB/s (24.4MB/s-24.4MB/s), io=46.6MiB (48.9MB), run=2007-2007msec 00:29:27.757 04:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:27.757 04:12:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:27.757 04:12:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:31.932 04:12:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:31.932 04:12:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:35.206 04:12:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:35.206 04:12:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:37.101 rmmod nvme_tcp 
00:29:37.101 rmmod nvme_fabrics 00:29:37.101 rmmod nvme_keyring 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 935828 ']' 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 935828 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 935828 ']' 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 935828 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 935828 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:37.101 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 935828' 00:29:37.101 killing process with pid 935828 00:29:37.102 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 935828 00:29:37.102 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 935828 00:29:37.360 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:37.360 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:37.360 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:37.360 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:37.360 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:37.360 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.360 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.360 04:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:39.890 00:29:39.890 real 0m37.308s 00:29:39.890 user 2m22.986s 00:29:39.890 sys 0m7.213s 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.890 ************************************ 00:29:39.890 END TEST nvmf_fio_host 00:29:39.890 ************************************ 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.890 ************************************ 00:29:39.890 START TEST nvmf_failover 00:29:39.890 ************************************ 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:39.890 * Looking for test storage... 
00:29:39.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.890 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.891 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.891 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:39.891 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:39.891 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:39.891 04:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.262 
04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:41.262 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:41.262 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:41.262 04:12:56 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:41.262 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:41.262 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.262 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:41.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:41.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:29:41.522 00:29:41.522 --- 10.0.0.2 ping statistics --- 00:29:41.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.522 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:41.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:29:41.522 00:29:41.522 --- 10.0.0.1 ping statistics --- 00:29:41.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.522 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 
00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=941893 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 941893 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 941893 ']' 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:41.522 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:41.522 [2024-07-25 04:12:56.705555] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:29:41.522 [2024-07-25 04:12:56.705648] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.522 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.522 [2024-07-25 04:12:56.745412] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:41.522 [2024-07-25 04:12:56.772672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:41.788 [2024-07-25 04:12:56.859627] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.788 [2024-07-25 04:12:56.859679] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.788 [2024-07-25 04:12:56.859702] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.788 [2024-07-25 04:12:56.859713] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.788 [2024-07-25 04:12:56.859722] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.788 [2024-07-25 04:12:56.859814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.788 [2024-07-25 04:12:56.859877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.788 [2024-07-25 04:12:56.859880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.788 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:41.788 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:41.788 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:41.788 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:41.788 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:41.788 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.788 04:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:42.046 [2024-07-25 
04:12:57.215101] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.046 04:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:42.304 Malloc0 00:29:42.304 04:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:42.561 04:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:42.818 04:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.075 [2024-07-25 04:12:58.224490] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.075 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:43.331 [2024-07-25 04:12:58.473166] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:43.331 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:43.589 [2024-07-25 04:12:58.718017] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:43.589 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=942185 00:29:43.589 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:43.589 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:43.589 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 942185 /var/tmp/bdevperf.sock 00:29:43.589 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 942185 ']' 00:29:43.589 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:43.589 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:43.589 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:43.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:43.589 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:43.589 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:43.846 04:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:43.846 04:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:43.846 04:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:44.410 NVMe0n1 00:29:44.410 04:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:44.667 00:29:44.667 04:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=942319 00:29:44.667 04:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:44.667 04:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:45.600 04:13:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:45.859 [2024-07-25 04:13:01.073490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 
04:13:01.073632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073670] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073780] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 [2024-07-25 04:13:01.073958] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 (last message repeated for timestamps 04:13:01.073970 through 04:13:01.074392) [2024-07-25 04:13:01.074404] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc480 is same with the state(5) to be set 00:29:45.859 04:13:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:49.136 04:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:49.393 00:29:49.393 04:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:49.650 04:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:52.929 04:13:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.929 [2024-07-25 04:13:08.110038] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.929 04:13:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:53.862 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:54.120 [2024-07-25 04:13:09.361775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdff0 is same with the state(5) to be set 00:29:54.120 [2024-07-25 04:13:09.361858] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdff0 is same with the state(5) to be set 00:29:54.120 [2024-07-25 04:13:09.361892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdff0 is same with the state(5) to be 
set 00:29:54.120 (last message repeated for timestamps 04:13:09.361920 through 04:13:09.362188) [2024-07-25 04:13:09.362200] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdff0 is same with the state(5) to be set 00:29:54.120 [2024-07-25 04:13:09.362211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdff0 is same with the state(5) to be set 00:29:54.120 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 942319 00:30:00.688 0 00:30:00.688 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 942185 00:30:00.688 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 942185 ']' 00:30:00.688 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 942185 00:30:00.688 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:00.688 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:00.688 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 942185 00:30:00.688 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:00.688 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:00.688 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 942185' 00:30:00.688 killing process with pid 942185 00:30:00.688 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 942185 00:30:00.688 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 942185 00:30:00.688 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:00.688 [2024-07-25 04:12:58.782532] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:30:00.688 [2024-07-25 04:12:58.782638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942185 ] 00:30:00.688 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.688 [2024-07-25 04:12:58.814489] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:00.688 [2024-07-25 04:12:58.843602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.688 [2024-07-25 04:12:58.935038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.688 Running I/O for 15 seconds... 00:30:00.688 [2024-07-25 04:13:01.077134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.688 [2024-07-25 04:13:01.077176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.688 [2024-07-25 04:13:01.077214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.688 [2024-07-25 04:13:01.077229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.688 [2024-07-25 04:13:01.077270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.688 [2024-07-25 04:13:01.077288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.688 [2024-07-25 04:13:01.077304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77688 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.688 [2024-07-25 04:13:01.077318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.688 (the WRITE command / ABORTED - SQ DELETION completion pair repeats for lba 77696 through lba 78216, with two interleaved READ pairs for lba 77664 and lba 77672, varying cid values, timestamps 04:13:01.077335 through 04:13:01.079360) [2024-07-25 04:13:01.079374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.690 [2024-07-25 04:13:01.079403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.690 [2024-07-25 04:13:01.079436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.690 [2024-07-25 04:13:01.079465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.690 [2024-07-25 04:13:01.079494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.690 [2024-07-25 04:13:01.079523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.690 [2024-07-25 04:13:01.079566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.690 [2024-07-25 04:13:01.079594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.690 [2024-07-25 04:13:01.079622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.690 [2024-07-25 04:13:01.079650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.690 [2024-07-25 04:13:01.079679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.690 [2024-07-25 04:13:01.079706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 
04:13:01.079734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.690 [2024-07-25 04:13:01.079750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:30:00.690 [2024-07-25 04:13:01.079763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.690 [2024-07-25 04:13:01.079858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.690 [2024-07-25 04:13:01.079887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.690 [2024-07-25 04:13:01.079919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.690 [2024-07-25 04:13:01.079946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.690 [2024-07-25 04:13:01.079960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c850 is same with the state(5) to be set 
00:30:00.690 [2024-07-25 04:13:01.080150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78336 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:30:00.691 [2024-07-25 04:13:01.080343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78344 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78360 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78368 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78376 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78384 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78392 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 
[2024-07-25 04:13:01.080689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78400 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78408 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78416 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:78424 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78432 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78440 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.080963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.080974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.080984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78448 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.080997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.081009] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.081019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.081030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78456 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.081043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.081055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.081066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.081076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78464 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.081088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.081100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.081111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.081122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78472 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.081134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.081146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.081157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 
04:13:01.081167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78480 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.081179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.081192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.081202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.081216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78488 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.081229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.691 [2024-07-25 04:13:01.081265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.691 [2024-07-25 04:13:01.081279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.691 [2024-07-25 04:13:01.081291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78496 len:8 PRP1 0x0 PRP2 0x0 00:30:00.691 [2024-07-25 04:13:01.081303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.081317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.081329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.081341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78504 len:8 PRP1 0x0 PRP2 0x0 00:30:00.692 [2024-07-25 04:13:01.081353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.081366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.081377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.081388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78512 len:8 PRP1 0x0 PRP2 0x0 00:30:00.692 [2024-07-25 04:13:01.081401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.081414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.081424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.081435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78520 len:8 PRP1 0x0 PRP2 0x0 00:30:00.692 [2024-07-25 04:13:01.081448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.081461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.081472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.081483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78528 len:8 PRP1 0x0 PRP2 0x0 00:30:00.692 [2024-07-25 04:13:01.081496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.081509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.081520] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.081531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78536 len:8 PRP1 0x0 PRP2 0x0 00:30:00.692 [2024-07-25 04:13:01.081544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.081572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.081583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.081593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78544 len:8 PRP1 0x0 PRP2 0x0 00:30:00.692 [2024-07-25 04:13:01.081606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.081618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.081632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.081643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78552 len:8 PRP1 0x0 PRP2 0x0 00:30:00.692 [2024-07-25 04:13:01.081656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.081670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.081680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.081692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78560 len:8 PRP1 0x0 PRP2 0x0 00:30:00.692 
[2024-07-25 04:13:01.081704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.081717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.081727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.081738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78568 len:8 PRP1 0x0 PRP2 0x0 00:30:00.692 [2024-07-25 04:13:01.081750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.081763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.081773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.081784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78576 len:8 PRP1 0x0 PRP2 0x0 00:30:00.692 [2024-07-25 04:13:01.081796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.081809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.081819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.081830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78584 len:8 PRP1 0x0 PRP2 0x0 00:30:00.692 [2024-07-25 04:13:01.081842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.081855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.081866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.081876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78592 len:8 PRP1 0x0 PRP2 0x0 00:30:00.692 [2024-07-25 04:13:01.081889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.081902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.081912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.081923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78600 len:8 PRP1 0x0 PRP2 0x0 00:30:00.692 [2024-07-25 04:13:01.081935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.081948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.081958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.081970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78608 len:8 PRP1 0x0 PRP2 0x0 00:30:00.692 [2024-07-25 04:13:01.081983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.692 [2024-07-25 04:13:01.082001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.692 [2024-07-25 04:13:01.082012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.692 [2024-07-25 04:13:01.082023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78616 len:8 PRP1 0x0 PRP2 0x0
00:30:00.692 [2024-07-25 04:13:01.082036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:00.692 [2024-07-25 04:13:01.082049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:00.692 [2024-07-25 04:13:01.082059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... the same four-record cycle (nvme_io_qpair_print_command / spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08)" / nvme_qpair_abort_queued_reqs "aborting queued i/o" / nvme_qpair_manual_complete_request "Command completed manually:") repeats for every remaining queued request on qid:1 between 04:13:01.082 and 04:13:01.090: WRITEs at lba:78624 through lba:78664, READs at lba:77648 and lba:77656, and WRITEs at lba:77680 through lba:78144, all sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0 ...]
00:30:00.696 [2024-07-25 04:13:01.090737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:00.696 [2024-07-25 04:13:01.090748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78152 len:8 PRP1 0x0 PRP2 0x0
00:30:00.696 [2024-07-25 04:13:01.090760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.696 [2024-07-25 04:13:01.090774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.696 [2024-07-25 04:13:01.090785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.696 [2024-07-25 04:13:01.090799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77664 len:8 PRP1 0x0 PRP2 0x0 00:30:00.696 [2024-07-25 04:13:01.090813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.696 [2024-07-25 04:13:01.090827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.696 [2024-07-25 04:13:01.090838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.696 [2024-07-25 04:13:01.090849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77672 len:8 PRP1 0x0 PRP2 0x0 00:30:00.696 [2024-07-25 04:13:01.090862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.696 [2024-07-25 04:13:01.090875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.696 [2024-07-25 04:13:01.090886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.696 [2024-07-25 04:13:01.090911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78160 len:8 PRP1 0x0 PRP2 0x0 00:30:00.696 [2024-07-25 04:13:01.090924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.696 [2024-07-25 04:13:01.090938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.696 [2024-07-25 04:13:01.090948] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.696 [2024-07-25 04:13:01.090959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78168 len:8 PRP1 0x0 PRP2 0x0 00:30:00.696 [2024-07-25 04:13:01.090972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.696 [2024-07-25 04:13:01.090984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.696 [2024-07-25 04:13:01.090994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.696 [2024-07-25 04:13:01.091005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78176 len:8 PRP1 0x0 PRP2 0x0 00:30:00.696 [2024-07-25 04:13:01.091018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.696 [2024-07-25 04:13:01.091031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.696 [2024-07-25 04:13:01.091041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.696 [2024-07-25 04:13:01.091052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78184 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78192 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 
[2024-07-25 04:13:01.091112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78200 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78208 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78216 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78224 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78232 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78240 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78248 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78256 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78264 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78272 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78280 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78288 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78296 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.697 [2024-07-25 04:13:01.091846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.697 [2024-07-25 04:13:01.091857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:30:00.697 [2024-07-25 04:13:01.091870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:01.091927] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x135fcb0 was disconnected and freed. reset controller. 00:30:00.697 [2024-07-25 04:13:01.091945] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:00.697 [2024-07-25 04:13:01.091959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.697 [2024-07-25 04:13:01.092030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x136c850 (9): Bad file descriptor 00:30:00.697 [2024-07-25 04:13:01.095305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.697 [2024-07-25 04:13:01.258556] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:00.697 [2024-07-25 04:13:04.815281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.697 [2024-07-25 04:13:04.815344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:04.815385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.697 [2024-07-25 04:13:04.815401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:04.815419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.697 [2024-07-25 04:13:04.815433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:04.815448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.697 [2024-07-25 04:13:04.815462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:04.815478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.697 [2024-07-25 04:13:04.815492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.697 [2024-07-25 04:13:04.815523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.697 [2024-07-25 04:13:04.815548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.815564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.815591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.815608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.815623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.815638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.815652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.815666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.815680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.815694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.815707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.815721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 
lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.815734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.815748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.815771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.815787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.815799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.815814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.815826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.815840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.815853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.815868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.815881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:00.698 [2024-07-25 04:13:04.815895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.815909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.815924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.815937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.815953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.815966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.815981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.815994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.816021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.698 [2024-07-25 04:13:04.816048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 
lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 
[2024-07-25 04:13:04.816401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.698 [2024-07-25 04:13:04.816521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.698 [2024-07-25 04:13:04.816536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.699 [2024-07-25 04:13:04.816579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.699 [2024-07-25 04:13:04.816607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.699 [2024-07-25 04:13:04.816635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.699 [2024-07-25 04:13:04.816663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.699 [2024-07-25 04:13:04.816692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.699 [2024-07-25 04:13:04.816720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.699 [2024-07-25 04:13:04.816747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.699 [2024-07-25 04:13:04.816775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.816804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.816833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.816862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.816893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 
[2024-07-25 04:13:04.816908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.816922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.816950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.816979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.816993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 
nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:00.699 [2024-07-25 04:13:04.817427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.699 [2024-07-25 04:13:04.817644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.699 [2024-07-25 04:13:04.817661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.817677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.817690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.817705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.817719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.817734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.817747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.817762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.817775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.817790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.817804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.817819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.817832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.817847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.817861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.817876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.817889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.817903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.817916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:00.700 [2024-07-25 04:13:04.817931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.817945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.817960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.817973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.817988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:00.700 [2024-07-25 04:13:04.818452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.700 [2024-07-25 04:13:04.818711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.700 [2024-07-25 04:13:04.818739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.700 [2024-07-25 04:13:04.818801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.700 [2024-07-25 04:13:04.818815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.701 [2024-07-25 04:13:04.818830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.818845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.701 [2024-07-25 04:13:04.818858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.818873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.701 [2024-07-25 04:13:04.818887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.818902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.701 [2024-07-25 04:13:04.818915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.818930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.701 [2024-07-25 04:13:04.818943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 
[2024-07-25 04:13:04.818957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.701 [2024-07-25 04:13:04.818971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.818986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.701 [2024-07-25 04:13:04.818999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.819014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.701 [2024-07-25 04:13:04.819028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.819042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.701 [2024-07-25 04:13:04.819056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.819071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.701 [2024-07-25 04:13:04.819084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.819099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.701 [2024-07-25 04:13:04.819112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.819127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.701 [2024-07-25 04:13:04.819144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.819159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.701 [2024-07-25 04:13:04.819173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.819187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1390670 is same with the state(5) to be set 00:30:00.701 [2024-07-25 04:13:04.819204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.701 [2024-07-25 04:13:04.819216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.701 [2024-07-25 04:13:04.819228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113192 len:8 PRP1 0x0 PRP2 0x0 00:30:00.701 [2024-07-25 04:13:04.819249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.819325] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1390670 was disconnected and freed. reset controller. 
00:30:00.701 [2024-07-25 04:13:04.819345] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:00.701 [2024-07-25 04:13:04.819380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.701 [2024-07-25 04:13:04.819408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.819433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.701 [2024-07-25 04:13:04.819448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.819462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.701 [2024-07-25 04:13:04.819475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.819489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.701 [2024-07-25 04:13:04.819502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.701 [2024-07-25 04:13:04.819515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:00.701 [2024-07-25 04:13:04.822872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:00.701 [2024-07-25 04:13:04.822915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x136c850 (9): Bad file descriptor
00:30:00.701 [2024-07-25 04:13:04.904813] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:00.701 [2024-07-25 04:13:09.363542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:00.701 [2024-07-25 04:13:09.363597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical print_command/print_completion pairs (ABORTED - SQ DELETION) repeated for READ commands lba:52488 through lba:52720, elided ...]
00:30:00.702 [2024-07-25 04:13:09.364582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:00.702 [2024-07-25 04:13:09.364595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical print_command/print_completion pairs (ABORTED - SQ DELETION) repeated for WRITE commands lba:52792 through lba:53160, elided ...]
00:30:00.704 [2024-07-25 04:13:09.366020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:00.704 [2024-07-25 04:13:09.366037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53168 len:8 PRP1 0x0 PRP2 0x0
00:30:00.704 [2024-07-25 04:13:09.366050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:00.704 [2024-07-25 04:13:09.366067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... same manual-complete/abort-queued-i/o sequence repeated for WRITE commands lba:53176 through lba:53296, elided; final record truncated at chunk boundary ...]
sqhd:0000 p:0 m:0 dnr:0 00:30:00.705 [2024-07-25 04:13:09.366891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.705 [2024-07-25 04:13:09.366901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.705 [2024-07-25 04:13:09.366912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53304 len:8 PRP1 0x0 PRP2 0x0 00:30:00.705 [2024-07-25 04:13:09.366924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.705 [2024-07-25 04:13:09.366937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.705 [2024-07-25 04:13:09.366947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.705 [2024-07-25 04:13:09.366957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53312 len:8 PRP1 0x0 PRP2 0x0 00:30:00.705 [2024-07-25 04:13:09.366970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.705 [2024-07-25 04:13:09.366983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.705 [2024-07-25 04:13:09.366993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.705 [2024-07-25 04:13:09.367004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53320 len:8 PRP1 0x0 PRP2 0x0 00:30:00.705 [2024-07-25 04:13:09.367017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.705 [2024-07-25 04:13:09.367041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.705 [2024-07-25 04:13:09.367051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:00.705 [2024-07-25 04:13:09.367062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53328 len:8 PRP1 0x0 PRP2 0x0 00:30:00.705 [2024-07-25 04:13:09.367074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.705 [2024-07-25 04:13:09.367087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.705 [2024-07-25 04:13:09.367097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.705 [2024-07-25 04:13:09.367108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53336 len:8 PRP1 0x0 PRP2 0x0 00:30:00.705 [2024-07-25 04:13:09.367120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.705 [2024-07-25 04:13:09.367133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.705 [2024-07-25 04:13:09.367143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.705 [2024-07-25 04:13:09.367154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53344 len:8 PRP1 0x0 PRP2 0x0 00:30:00.705 [2024-07-25 04:13:09.367167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.705 [2024-07-25 04:13:09.367189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.705 [2024-07-25 04:13:09.367199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.705 [2024-07-25 04:13:09.367210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53352 len:8 PRP1 0x0 PRP2 0x0 00:30:00.705 [2024-07-25 04:13:09.367238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.705 [2024-07-25 04:13:09.367270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.705 [2024-07-25 04:13:09.367281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.705 [2024-07-25 04:13:09.367292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53360 len:8 PRP1 0x0 PRP2 0x0 00:30:00.705 [2024-07-25 04:13:09.367305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.705 [2024-07-25 04:13:09.367318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.705 [2024-07-25 04:13:09.367328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.705 [2024-07-25 04:13:09.367339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53368 len:8 PRP1 0x0 PRP2 0x0 00:30:00.705 [2024-07-25 04:13:09.367352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.705 [2024-07-25 04:13:09.367365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.705 [2024-07-25 04:13:09.367375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.705 [2024-07-25 04:13:09.367386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53376 len:8 PRP1 0x0 PRP2 0x0 00:30:00.705 [2024-07-25 04:13:09.367399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.705 [2024-07-25 04:13:09.367413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.705 
[2024-07-25 04:13:09.367424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.705 [2024-07-25 04:13:09.367435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53384 len:8 PRP1 0x0 PRP2 0x0 00:30:00.705 [2024-07-25 04:13:09.367448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.705 [2024-07-25 04:13:09.367461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.705 [2024-07-25 04:13:09.367471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.705 [2024-07-25 04:13:09.367482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53392 len:8 PRP1 0x0 PRP2 0x0 00:30:00.705 [2024-07-25 04:13:09.367495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.705 [2024-07-25 04:13:09.367508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.705 [2024-07-25 04:13:09.367518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.705 [2024-07-25 04:13:09.367529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53400 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.367545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.367573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.367585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.367596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:53408 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.367621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.367634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.367644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.367655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53416 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.367667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.367680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.367690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.367708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53424 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.367720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.367739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.367751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.367762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53432 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.367774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.367786] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.367797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.367808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53440 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.367821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.367834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.367844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.367855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53448 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.367868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.367881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.367891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.367902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53456 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.367914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.367927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.367938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 
04:13:09.367949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53464 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.367961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.367974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.367984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.367998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53472 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.368011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.368023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.368034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.368045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53480 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.368057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.368070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.368080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.368092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53488 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.368105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.368123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.368134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.368145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53496 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.368157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.368170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.368181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.368191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52728 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.368205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.368218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.368232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.368248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52736 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.368278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.368292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.368303] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.368315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52744 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.368328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.368341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.706 [2024-07-25 04:13:09.368352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.706 [2024-07-25 04:13:09.368363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52752 len:8 PRP1 0x0 PRP2 0x0 00:30:00.706 [2024-07-25 04:13:09.368376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.706 [2024-07-25 04:13:09.368389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.707 [2024-07-25 04:13:09.368403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.707 [2024-07-25 04:13:09.368414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52760 len:8 PRP1 0x0 PRP2 0x0 00:30:00.707 [2024-07-25 04:13:09.368427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.707 [2024-07-25 04:13:09.368440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.707 [2024-07-25 04:13:09.368451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.707 [2024-07-25 04:13:09.368462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52768 len:8 PRP1 0x0 PRP2 0x0 00:30:00.707 
[2024-07-25 04:13:09.368475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.707 [2024-07-25 04:13:09.368488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:00.707 [2024-07-25 04:13:09.368499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:00.707 [2024-07-25 04:13:09.368510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52776 len:8 PRP1 0x0 PRP2 0x0 00:30:00.707 [2024-07-25 04:13:09.368523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.707 [2024-07-25 04:13:09.368606] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1390670 was disconnected and freed. reset controller. 00:30:00.707 [2024-07-25 04:13:09.368625] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:00.707 [2024-07-25 04:13:09.368657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.707 [2024-07-25 04:13:09.368691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.707 [2024-07-25 04:13:09.368707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.707 [2024-07-25 04:13:09.368721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.707 [2024-07-25 04:13:09.368735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.707 [2024-07-25 04:13:09.368748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.707 [2024-07-25 04:13:09.368763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.707 [2024-07-25 04:13:09.368776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.707 [2024-07-25 04:13:09.368789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.707 [2024-07-25 04:13:09.368829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x136c850 (9): Bad file descriptor 00:30:00.707 [2024-07-25 04:13:09.372122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.707 [2024-07-25 04:13:09.447984] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:00.707 00:30:00.707 Latency(us) 00:30:00.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.707 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:00.707 Verification LBA range: start 0x0 length 0x4000 00:30:00.707 NVMe0n1 : 15.01 8300.02 32.42 830.25 0.00 13991.57 770.65 23301.69 00:30:00.707 =================================================================================================================== 00:30:00.707 Total : 8300.02 32.42 830.25 0.00 13991.57 770.65 23301.69 00:30:00.707 Received shutdown signal, test time was about 15.000000 seconds 00:30:00.707 00:30:00.707 Latency(us) 00:30:00.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.707 =================================================================================================================== 00:30:00.707 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=944150 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 944150 /var/tmp/bdevperf.sock 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 944150 ']' 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:00.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:00.707 [2024-07-25 04:13:15.766579] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:00.707 04:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:00.965 [2024-07-25 04:13:16.003177] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:00.965 04:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:01.221 NVMe0n1 00:30:01.221 04:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller 
-b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:01.478 00:30:01.478 04:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:02.042 00:30:02.042 04:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:02.042 04:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:02.298 04:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:02.554 04:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:05.828 04:13:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:05.828 04:13:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:05.828 04:13:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=944820 00:30:05.828 04:13:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:05.828 04:13:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 944820 00:30:06.760 0 00:30:07.018 04:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:07.018 [2024-07-25 04:13:15.289426] Starting SPDK v24.09-pre git sha1 
d005e023b / DPDK 24.07.0-rc3 initialization... 00:30:07.018 [2024-07-25 04:13:15.289534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944150 ] 00:30:07.018 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.018 [2024-07-25 04:13:15.322626] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:07.018 [2024-07-25 04:13:15.352258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.018 [2024-07-25 04:13:15.436190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.018 [2024-07-25 04:13:17.637672] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:07.018 [2024-07-25 04:13:17.637766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.018 [2024-07-25 04:13:17.637790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.018 [2024-07-25 04:13:17.637807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.018 [2024-07-25 04:13:17.637821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.018 [2024-07-25 04:13:17.637836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.018 [2024-07-25 04:13:17.637850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:07.018 [2024-07-25 04:13:17.637866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.018 [2024-07-25 04:13:17.637880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.018 [2024-07-25 04:13:17.637894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:07.018 [2024-07-25 04:13:17.637941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:07.018 [2024-07-25 04:13:17.637974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1524850 (9): Bad file descriptor 00:30:07.018 [2024-07-25 04:13:17.646966] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:07.018 Running I/O for 1 seconds... 00:30:07.018 00:30:07.018 Latency(us) 00:30:07.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.018 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:07.018 Verification LBA range: start 0x0 length 0x4000 00:30:07.018 NVMe0n1 : 1.00 8648.35 33.78 0.00 0.00 14733.13 2439.40 11796.48 00:30:07.018 =================================================================================================================== 00:30:07.018 Total : 8648.35 33.78 0.00 0.00 14733.13 2439.40 11796.48 00:30:07.018 04:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:07.018 04:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:07.018 04:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:07.276 04:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:07.276 04:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:07.533 04:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:07.790 04:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:11.061 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:11.061 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:11.318 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 944150 00:30:11.318 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 944150 ']' 00:30:11.318 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 944150 00:30:11.318 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:11.318 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:11.318 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 944150 00:30:11.318 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:11.318 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:11.318 04:13:26 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 944150' 00:30:11.318 killing process with pid 944150 00:30:11.318 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 944150 00:30:11.318 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 944150 00:30:11.576 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:11.576 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:11.833 rmmod nvme_tcp 00:30:11.833 rmmod nvme_fabrics 00:30:11.833 rmmod nvme_keyring 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:11.833 
04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 941893 ']' 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 941893 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 941893 ']' 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 941893 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 941893 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 941893' 00:30:11.833 killing process with pid 941893 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 941893 00:30:11.833 04:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 941893 00:30:12.091 04:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:12.091 04:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:12.091 04:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:12.091 04:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:12.091 04:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:12.091 04:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:30:12.091 04:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.091 04:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.003 04:13:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:14.003 00:30:14.003 real 0m34.678s 00:30:14.003 user 2m1.086s 00:30:14.003 sys 0m6.411s 00:30:14.003 04:13:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:14.003 04:13:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:14.003 ************************************ 00:30:14.003 END TEST nvmf_failover 00:30:14.003 ************************************ 00:30:14.261 04:13:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:14.261 04:13:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:14.261 04:13:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:14.261 04:13:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.262 ************************************ 00:30:14.262 START TEST nvmf_host_discovery 00:30:14.262 ************************************ 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:14.262 * Looking for test storage... 
00:30:14.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- 
# NVME_CONNECT='nvme connect' 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:14.262 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:16.177 
04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.177 04:13:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:16.177 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:16.177 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:16.177 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:16.177 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:16.177 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:16.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:30:16.436 00:30:16.436 --- 10.0.0.2 ping statistics --- 00:30:16.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.436 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:16.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:30:16.436 00:30:16.436 --- 10.0.0.1 ping statistics --- 00:30:16.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.436 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=947415 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 947415 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 947415 ']' 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:16.436 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:16.436 [2024-07-25 04:13:31.601444] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:30:16.436 [2024-07-25 04:13:31.601539] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.436 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.436 [2024-07-25 04:13:31.639335] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:16.436 [2024-07-25 04:13:31.665334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.694 [2024-07-25 04:13:31.754194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:16.694 [2024-07-25 04:13:31.754247] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.694 [2024-07-25 04:13:31.754284] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.694 [2024-07-25 04:13:31.754295] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.694 [2024-07-25 04:13:31.754305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.694 [2024-07-25 04:13:31.754342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.694 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:16.694 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:30:16.694 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:16.694 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:16.694 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:16.695 [2024-07-25 04:13:31.894445] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.695 04:13:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:16.695 [2024-07-25 04:13:31.902685] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:16.695 null0 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:16.695 null1 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=947445 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 947445 /tmp/host.sock 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 947445 ']' 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:16.695 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:16.695 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:16.695 [2024-07-25 04:13:31.978119] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:30:16.695 [2024-07-25 04:13:31.978200] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947445 ] 00:30:16.953 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.953 [2024-07-25 04:13:32.012262] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:30:16.953 [2024-07-25 04:13:32.044875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.953 [2024-07-25 04:13:32.136731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:17.211 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:17.212 04:13:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:17.212 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.470 [2024-07-25 04:13:32.560468] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:17.470 04:13:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:17.470 04:13:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:17.470 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.471 04:13:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:30:17.471 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:30:18.036 [2024-07-25 04:13:33.281914] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:18.036 [2024-07-25 04:13:33.281952] bdev_nvme.c:7091:discovery_poller: 
*INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:18.036 [2024-07-25 04:13:33.281978] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:18.293 [2024-07-25 04:13:33.368245] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:18.293 [2024-07-25 04:13:33.555323] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:18.293 [2024-07-25 04:13:33.555354] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition 
'[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.550 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 
00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:18.809 
04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.809 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.809 [2024-07-25 04:13:34.012781] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:18.809 [2024-07-25 04:13:34.013432] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:18.809 [2024-07-25 04:13:34.013468] 
bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:18.809 [2024-07-25 04:13:34.101189] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:18.809 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:18.810 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:18.810 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:18.810 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:18.810 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.810 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:18.810 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.810 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:19.067 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.067 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:19.067 04:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 
00:30:19.325 [2024-07-25 04:13:34.410732] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:19.325 [2024-07-25 04:13:34.410759] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:19.325 [2024-07-25 04:13:34.410770] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:19.888 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:19.888 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:19.888 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:19.888 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:19.888 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:19.888 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.888 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:19.888 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:19.888 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:19.888 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:20.146 04:13:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.146 [2024-07-25 04:13:35.233354] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:20.146 [2024-07-25 04:13:35.233390] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:20.146 [2024-07-25 04:13:35.236524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:20.146 [2024-07-25 04:13:35.236567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.146 [2024-07-25 04:13:35.236585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:20.146 [2024-07-25 04:13:35.236599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.146 [2024-07-25 04:13:35.236622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:20.146 [2024-07-25 04:13:35.236636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.146 [2024-07-25 04:13:35.236650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:20.146 [2024-07-25 04:13:35.236663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.146 [2024-07-25 04:13:35.236677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb6e0 is same with the state(5) to be set 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:20.146 04:13:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:20.146 [2024-07-25 04:13:35.246542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb6e0 (9): Bad file descriptor 00:30:20.146 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.146 [2024-07-25 04:13:35.256604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:20.146 [2024-07-25 04:13:35.256902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.147 [2024-07-25 04:13:35.256933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22bb6e0 with addr=10.0.0.2, port=4420 00:30:20.147 [2024-07-25 04:13:35.256950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb6e0 is same with the state(5) to be set 00:30:20.147 [2024-07-25 04:13:35.256974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb6e0 (9): Bad file descriptor 00:30:20.147 [2024-07-25 04:13:35.257010] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:20.147 [2024-07-25 04:13:35.257029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:20.147 
[2024-07-25 04:13:35.257045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:20.147 [2024-07-25 04:13:35.257067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.147 [2024-07-25 04:13:35.266679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:20.147 [2024-07-25 04:13:35.266952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.147 [2024-07-25 04:13:35.266980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22bb6e0 with addr=10.0.0.2, port=4420 00:30:20.147 [2024-07-25 04:13:35.266997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb6e0 is same with the state(5) to be set 00:30:20.147 [2024-07-25 04:13:35.267020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb6e0 (9): Bad file descriptor 00:30:20.147 [2024-07-25 04:13:35.267041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:20.147 [2024-07-25 04:13:35.267055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:20.147 [2024-07-25 04:13:35.267069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:20.147 [2024-07-25 04:13:35.267094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.147 [2024-07-25 04:13:35.276750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:20.147 [2024-07-25 04:13:35.276973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.147 [2024-07-25 04:13:35.277000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22bb6e0 with addr=10.0.0.2, port=4420 00:30:20.147 [2024-07-25 04:13:35.277017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb6e0 is same with the state(5) to be set 00:30:20.147 [2024-07-25 04:13:35.277039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb6e0 (9): Bad file descriptor 00:30:20.147 [2024-07-25 04:13:35.277071] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:20.147 [2024-07-25 04:13:35.277089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:20.147 [2024-07-25 04:13:35.277104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:20.147 [2024-07-25 04:13:35.277123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:20.147 [2024-07-25 04:13:35.286837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:20.147 [2024-07-25 04:13:35.287119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.147 [2024-07-25 04:13:35.287148] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22bb6e0 with addr=10.0.0.2, port=4420 00:30:20.147 [2024-07-25 04:13:35.287165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb6e0 is same with the state(5) to be set 00:30:20.147 [2024-07-25 04:13:35.287188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb6e0 (9): Bad file descriptor 00:30:20.147 [2024-07-25 04:13:35.287209] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:20.147 [2024-07-25 04:13:35.287224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:20.147 [2024-07-25 04:13:35.287237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:20.147 [2024-07-25 04:13:35.287266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.147 [2024-07-25 04:13:35.296907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:20.147 [2024-07-25 04:13:35.297114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.147 [2024-07-25 04:13:35.297143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22bb6e0 with addr=10.0.0.2, port=4420 00:30:20.147 [2024-07-25 04:13:35.297160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb6e0 is same with the state(5) to be set 00:30:20.147 [2024-07-25 04:13:35.297195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb6e0 (9): Bad file descriptor 00:30:20.147 [2024-07-25 04:13:35.297233] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:20.147 [2024-07-25 04:13:35.297261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:20.147 [2024-07-25 04:13:35.297289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:20.147 [2024-07-25 04:13:35.297309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.147 [2024-07-25 04:13:35.306981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:20.147 [2024-07-25 04:13:35.307165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.147 [2024-07-25 04:13:35.307193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22bb6e0 with addr=10.0.0.2, port=4420 00:30:20.147 [2024-07-25 04:13:35.307211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb6e0 is same with the state(5) to be set 00:30:20.147 [2024-07-25 04:13:35.307233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb6e0 (9): Bad file descriptor 00:30:20.147 [2024-07-25 04:13:35.307262] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:20.147 [2024-07-25 04:13:35.307277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:20.147 [2024-07-25 04:13:35.307301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:20.147 [2024-07-25 04:13:35.307321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.147 [2024-07-25 04:13:35.317049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:20.147 [2024-07-25 04:13:35.317270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.147 [2024-07-25 04:13:35.317299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22bb6e0 with addr=10.0.0.2, port=4420 00:30:20.147 [2024-07-25 04:13:35.317317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb6e0 is same with the state(5) to be set 00:30:20.147 [2024-07-25 04:13:35.317339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb6e0 (9): Bad file descriptor 00:30:20.147 [2024-07-25 04:13:35.317372] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:20.147 [2024-07-25 04:13:35.317390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:20.147 [2024-07-25 04:13:35.317404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:20.147 [2024-07-25 04:13:35.317424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:20.147 [2024-07-25 04:13:35.327118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:20.147 [2024-07-25 04:13:35.327315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.147 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.147 [2024-07-25 04:13:35.327345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x22bb6e0 with addr=10.0.0.2, port=4420 00:30:20.147 [2024-07-25 04:13:35.327364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb6e0 is same with the state(5) to be set 00:30:20.147 [2024-07-25 04:13:35.327387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb6e0 (9): Bad file descriptor 00:30:20.147 [2024-07-25 04:13:35.327408] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:20.148 [2024-07-25 04:13:35.327423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:20.148 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:20.148 [2024-07-25 04:13:35.327437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:20.148 [2024-07-25 04:13:35.327459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.148 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:20.148 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.148 [2024-07-25 04:13:35.337190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:20.148 [2024-07-25 04:13:35.337395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.148 [2024-07-25 04:13:35.337424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22bb6e0 with addr=10.0.0.2, port=4420 00:30:20.148 [2024-07-25 04:13:35.337441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb6e0 is same with the state(5) to be set 00:30:20.148 [2024-07-25 04:13:35.337464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb6e0 (9): Bad file descriptor 00:30:20.148 [2024-07-25 04:13:35.337497] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:20.148 [2024-07-25 04:13:35.337515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:20.148 [2024-07-25 04:13:35.337530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:20.148 [2024-07-25 04:13:35.337559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.148 [2024-07-25 04:13:35.347263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:20.148 [2024-07-25 04:13:35.347454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.148 [2024-07-25 04:13:35.347488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22bb6e0 with addr=10.0.0.2, port=4420 00:30:20.148 [2024-07-25 04:13:35.347505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb6e0 is same with the state(5) to be set 00:30:20.148 [2024-07-25 04:13:35.347528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb6e0 (9): Bad file descriptor 00:30:20.148 [2024-07-25 04:13:35.347549] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:20.148 [2024-07-25 04:13:35.347563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:20.148 [2024-07-25 04:13:35.347577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:20.148 [2024-07-25 04:13:35.347612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.148 [2024-07-25 04:13:35.357342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:20.148 [2024-07-25 04:13:35.357546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.148 [2024-07-25 04:13:35.357573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22bb6e0 with addr=10.0.0.2, port=4420 00:30:20.148 [2024-07-25 04:13:35.357590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb6e0 is same with the state(5) to be set 00:30:20.148 [2024-07-25 04:13:35.357612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb6e0 (9): Bad file descriptor 00:30:20.148 [2024-07-25 04:13:35.357646] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:20.148 [2024-07-25 04:13:35.357665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:20.148 [2024-07-25 04:13:35.357679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:20.148 [2024-07-25 04:13:35.357699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.148 [2024-07-25 04:13:35.360183] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:20.148 [2024-07-25 04:13:35.360211] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:20.148 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:30:20.148 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:30:21.080 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:21.080 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:21.080 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:21.080 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:21.080 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:21.080 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.080 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.080 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:21.080 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:21.338 04:13:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:21.338 
04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:21.338 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:21.338 04:13:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:21.339 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:21.339 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.339 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.339 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.339 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:21.339 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:21.339 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:21.339 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:21.339 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:21.339 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.339 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.711 [2024-07-25 04:13:37.664982] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:22.711 [2024-07-25 04:13:37.665025] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:22.711 [2024-07-25 04:13:37.665062] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:22.711 [2024-07-25 04:13:37.793480] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:22.711 [2024-07-25 04:13:37.898856] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:22.711 [2024-07-25 04:13:37.898912] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:30:22.711 request: 00:30:22.711 { 00:30:22.711 "name": "nvme", 00:30:22.711 "trtype": "tcp", 00:30:22.711 "traddr": "10.0.0.2", 00:30:22.711 "adrfam": "ipv4", 00:30:22.711 "trsvcid": "8009", 00:30:22.711 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:22.711 "wait_for_attach": true, 00:30:22.711 "method": "bdev_nvme_start_discovery", 00:30:22.711 "req_id": 1 00:30:22.711 } 00:30:22.711 Got JSON-RPC error response 00:30:22.711 response: 00:30:22.711 { 00:30:22.711 "code": -17, 00:30:22.711 "message": "File exists" 00:30:22.711 } 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.711 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.712 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:22.712 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:22.712 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.712 04:13:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:22.712 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:22.712 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:22.712 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:22.712 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.712 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:22.712 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.712 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:22.712 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.712 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:22.712 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:22.712 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:22.712 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:22.712 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:22.712 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:22.712 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:22.712 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:22.712 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:22.712 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.712 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.970 request: 00:30:22.970 { 00:30:22.970 "name": "nvme_second", 00:30:22.970 "trtype": "tcp", 00:30:22.970 "traddr": "10.0.0.2", 00:30:22.970 "adrfam": "ipv4", 00:30:22.970 "trsvcid": "8009", 00:30:22.970 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:22.970 "wait_for_attach": true, 00:30:22.970 "method": "bdev_nvme_start_discovery", 00:30:22.970 "req_id": 1 00:30:22.970 } 00:30:22.970 Got JSON-RPC error response 00:30:22.970 response: 00:30:22.970 { 00:30:22.970 "code": -17, 00:30:22.970 "message": "File exists" 00:30:22.970 } 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:22.970 
04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:22.970 04:13:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.970 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.901 [2024-07-25 04:13:39.110403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.901 [2024-07-25 04:13:39.110466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ede60 with addr=10.0.0.2, port=8010 00:30:23.901 [2024-07-25 04:13:39.110500] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:23.901 [2024-07-25 04:13:39.110516] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:23.901 [2024-07-25 04:13:39.110539] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:24.832 [2024-07-25 04:13:40.112966] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.832 [2024-07-25 04:13:40.113042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ede60 with addr=10.0.0.2, port=8010 00:30:24.832 [2024-07-25 04:13:40.113075] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:24.832 [2024-07-25 04:13:40.113092] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:24.832 [2024-07-25 04:13:40.113116] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:26.203 [2024-07-25 04:13:41.115006] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:26.203 request: 00:30:26.203 { 00:30:26.203 "name": "nvme_second", 00:30:26.203 "trtype": "tcp", 00:30:26.203 "traddr": "10.0.0.2", 00:30:26.203 "adrfam": "ipv4", 00:30:26.203 "trsvcid": "8010", 00:30:26.203 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:26.203 "wait_for_attach": false, 00:30:26.203 "attach_timeout_ms": 3000, 00:30:26.203 "method": "bdev_nvme_start_discovery", 00:30:26.203 "req_id": 1 00:30:26.203 } 00:30:26.203 Got JSON-RPC error response 00:30:26.203 response: 00:30:26.203 { 00:30:26.203 "code": -110, 00:30:26.203 "message": "Connection timed out" 00:30:26.203 } 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 947445 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:26.203 rmmod nvme_tcp 00:30:26.203 rmmod nvme_fabrics 00:30:26.203 rmmod nvme_keyring 00:30:26.203 04:13:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 947415 ']' 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 947415 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 947415 ']' 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 947415 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 947415 00:30:26.203 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:26.204 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:26.204 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 947415' 00:30:26.204 killing process with pid 947415 00:30:26.204 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 947415 00:30:26.204 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 947415 00:30:26.204 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:26.204 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:26.204 04:13:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:26.204 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:26.204 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:26.204 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.204 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.204 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:28.765 00:30:28.765 real 0m14.208s 00:30:28.765 user 0m21.020s 00:30:28.765 sys 0m2.973s 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:28.765 ************************************ 00:30:28.765 END TEST nvmf_host_discovery 00:30:28.765 ************************************ 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.765 ************************************ 00:30:28.765 START TEST nvmf_host_multipath_status 00:30:28.765 ************************************ 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:28.765 * Looking for test storage... 00:30:28.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 
-- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.765 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:28.766 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:28.766 04:13:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:30.666 
04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:30.666 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:30.666 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:30.667 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:30.667 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:30.667 04:13:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:30.667 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:30.667 04:13:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:30.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:30.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:30:30.667 00:30:30.667 --- 10.0.0.2 ping statistics --- 00:30:30.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.667 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:30.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:30.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:30:30.667 00:30:30.667 --- 10.0.0.1 ping statistics --- 00:30:30.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.667 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:30.667 04:13:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=950620 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 950620 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 950620 ']' 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:30.667 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:30.667 [2024-07-25 04:13:45.772598] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:30:30.667 [2024-07-25 04:13:45.772675] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.667 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.667 [2024-07-25 04:13:45.812305] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:30.667 [2024-07-25 04:13:45.853287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:30.667 [2024-07-25 04:13:45.959458] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.667 [2024-07-25 04:13:45.959521] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:30.667 [2024-07-25 04:13:45.959574] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.667 [2024-07-25 04:13:45.959598] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.667 [2024-07-25 04:13:45.959618] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:30.667 [2024-07-25 04:13:45.959725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.667 [2024-07-25 04:13:45.959749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.926 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:30.926 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:30.926 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:30.926 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:30.926 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:30.926 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.926 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=950620 00:30:30.926 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:31.184 [2024-07-25 04:13:46.405476] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.184 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:31.442 Malloc0 00:30:31.700 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:31.957 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:32.214 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.472 [2024-07-25 04:13:47.579164] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.472 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:32.730 [2024-07-25 04:13:47.859913] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:32.730 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=950890 00:30:32.730 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:32.730 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:32.730 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 950890 /var/tmp/bdevperf.sock 00:30:32.730 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 950890 ']' 00:30:32.730 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:32.730 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:32.730 04:13:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:32.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:32.730 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:32.730 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:32.988 04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:32.988 04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:32.988 04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:33.245 04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:33.810 Nvme0n1 00:30:33.810 04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:34.068 Nvme0n1 00:30:34.068 04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:34.068 04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 
00:30:35.966 04:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:30:35.966 04:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:30:36.224 04:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:30:36.790 04:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:30:37.723 04:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:30:37.723 04:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:30:37.724 04:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:37.724 04:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:37.982 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:37.982 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:30:37.982 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:37.982 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:38.239 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:38.239 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:38.239 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:38.239 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:38.495 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:38.495 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:38.495 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:38.495 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:38.752 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:38.752 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:30:38.752 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:38.752 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:39.009 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:39.009 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:30:39.009 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:39.009 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:39.267 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:39.267 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:30:39.267 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:30:39.524 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:30:39.781 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:30:40.712 04:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:30:40.712 04:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:30:40.712 04:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:40.712 04:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:40.970 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:40.970 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:30:40.970 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:40.970 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:41.228 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:41.228 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:41.228 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:41.228 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:41.485 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:41.485 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:41.485 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:41.485 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:41.765 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:41.765 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:30:41.765 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:41.765 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:42.025 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:42.025 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:30:42.025 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:42.025 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:42.283 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:42.283 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:30:42.283 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:30:42.540 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:30:42.797 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:30:43.731 04:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:30:43.731 04:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:30:43.731 04:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:43.731 04:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:43.989 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:43.989 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:30:43.989 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:43.989 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:44.247 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:44.247 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:44.247 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:44.247 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:44.505 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:44.505 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:44.505 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:44.505 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:44.763 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:44.763 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:30:44.763 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:44.763 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:45.021 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:45.021 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:30:45.021 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:45.021 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:45.279 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:45.279 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:30:45.279 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:30:45.537 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:30:45.795 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:30:46.727 04:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:30:46.727 04:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:30:46.727 04:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:46.727 04:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:46.985 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:46.985 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:30:46.985 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:46.985 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:47.242 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:47.242 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:47.242 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:47.242 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:47.500 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:47.500 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:47.500 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:47.500 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:47.758 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:47.758 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:30:47.758 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:47.758 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:48.016 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:48.016 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:30:48.016 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:48.016 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:48.274 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:48.274 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:30:48.274 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:30:48.530 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:30:48.787 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:30:49.717 04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:30:49.717 04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:30:49.717 04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:49.717 04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:49.974 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:49.974 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:30:49.974 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:49.974 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:50.231 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:50.231 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:50.231 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:50.231 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:50.487 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:50.487 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:50.487 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:50.487 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:50.744 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:50.744 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:30:50.744 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:50.744 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:51.001 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:51.001 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:30:51.001 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:51.001 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:51.258 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:51.258 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:30:51.258 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:30:51.515 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:30:51.772 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:30:52.703 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:30:52.703 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:30:52.703 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:52.703 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:52.960 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:52.960 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:30:52.960 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:52.960 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:53.217 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:53.217 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:53.217 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:53.217 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:53.497 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:53.497 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:53.497 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:53.497 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:53.762 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:53.762 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:30:53.762 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:53.762 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:54.020 04:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:54.020 04:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:30:54.020 04:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:54.020 04:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:54.277 04:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:54.277 04:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:30:54.534 04:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:30:54.534 04:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:30:54.791 04:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:30:55.047 04:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:30:55.977 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:30:55.977 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:30:55.977 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:55.977 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:56.234 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:56.234 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:30:56.234 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:56.234 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:56.492 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:56.492 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:56.492 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:56.492 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:56.749 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:56.749 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:56.749 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:56.749 04:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:57.006 04:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:57.006 04:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:30:57.006 04:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:57.006 04:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:57.263 04:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:57.263 04:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:30:57.264 04:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:57.264 04:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:57.520 04:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:57.521 04:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:30:57.521 04:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:30:57.776 04:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:30:58.033 04:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
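In the trace above, switching the policy with `bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active` changes which paths report `current == true`: under the default active_passive policy only one path is current at a time, while with both listeners optimized under active_active the subsequent `check_status true true ...` expects both paths current. The expectations asserted by the various `check_status` calls in this log can be summarized as a toy model; this mimics the log's assertions only, not SPDK's actual path-selection code:

```shell
#!/usr/bin/env bash
# Toy model of the current-flag expectations asserted by check_status in this
# log (assumption: not SPDK's real selection logic): inaccessible paths are
# never current; active_passive marks the single best-ranked path (optimized
# beats non_optimized, the first listener wins ties); active_active marks
# every best-ranked path.
rank() { case $1 in optimized) echo 2;; non_optimized) echo 1;; *) echo 0;; esac; }

expected_current() {  # usage: expected_current <policy> <ana_4420> <ana_4421>
    local policy=$1 r1 r2 best
    r1=$(rank "$2"); r2=$(rank "$3")
    best=$(( r1 > r2 ? r1 : r2 ))
    if (( best == 0 )); then echo "false false"; return; fi
    if [[ $policy == active_active ]]; then
        echo "$( (( r1 == best )) && echo true || echo false ) $( (( r2 == best )) && echo true || echo false )"
    else
        if (( r1 == best )); then echo "true false"; else echo "false true"; fi
    fi
}

expected_current active_passive optimized optimized      # -> true false
expected_current active_passive inaccessible optimized   # -> false true
expected_current active_active  optimized optimized      # -> true true
expected_current active_active  non_optimized optimized  # -> false true
```

Each of these four outputs matches a `check_status` pair in the log (e.g. `check_status true false ...` after `set_ANA_state optimized optimized`, and `check_status true true ...` after the policy switch).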
00:30:58.965 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:58.965 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:58.965 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.965 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:59.222 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:59.222 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:59.223 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.223 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:59.480 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.480 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:59.480 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.480 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:30:59.737 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.737 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:59.737 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.737 04:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:59.994 04:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.994 04:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:59.994 04:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.994 04:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:00.252 04:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.252 04:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:00.252 04:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.252 04:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:31:00.509 04:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.509 04:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:00.509 04:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:00.766 04:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:01.022 04:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:01.954 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:01.954 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:01.954 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.954 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:02.211 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.211 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:02.211 04:14:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.212 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:02.469 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.469 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:02.469 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.469 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:02.727 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.727 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:02.727 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.727 04:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:02.985 04:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.985 04:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:02.985 04:14:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.985 04:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:03.243 04:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:03.243 04:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:03.243 04:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.243 04:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:03.500 04:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:03.500 04:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:03.500 04:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:03.758 04:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:04.017 04:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
00:31:04.950 04:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:04.950 04:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:04.950 04:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.950 04:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:05.208 04:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.208 04:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:05.208 04:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.208 04:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:05.466 04:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:05.466 04:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:05.466 04:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.466 04:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:31:05.729 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.729 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:05.729 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.729 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:06.024 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.024 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:06.024 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.024 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:06.280 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.280 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:06.280 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.280 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:31:06.537 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:06.537 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 950890 00:31:06.537 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 950890 ']' 00:31:06.537 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 950890 00:31:06.537 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:31:06.537 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:06.537 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 950890 00:31:06.537 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:31:06.537 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:31:06.537 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 950890' 00:31:06.537 killing process with pid 950890 00:31:06.537 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 950890 00:31:06.537 04:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 950890 00:31:06.796 Connection closed with partial response: 00:31:06.796 00:31:06.796 00:31:06.796 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 950890 00:31:06.796 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
00:31:06.796 [2024-07-25 04:13:47.923362] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:31:06.796 [2024-07-25 04:13:47.923458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950890 ] 00:31:06.796 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.796 [2024-07-25 04:13:47.956510] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:06.796 [2024-07-25 04:13:47.985687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.796 [2024-07-25 04:13:48.075392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:06.796 Running I/O for 90 seconds... 00:31:06.796 [2024-07-25 04:14:03.645402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.796 [2024-07-25 04:14:03.645459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.645521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.645542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.645566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.645582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 
dnr:0 00:31:06.796 [2024-07-25 04:14:03.645603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.645620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.645641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.645656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.645677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.645694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.645715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.645732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.645753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.645771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.645793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 
[2024-07-25 04:14:03.645810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.645831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.645847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.645880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.645900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.645922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.645939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.645960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.645976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.646014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.646031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 
04:14:03.646067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.646084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.646108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.646124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.646146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.646163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.646614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.646639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.646668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.646686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.646710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.796 [2024-07-25 04:14:03.646727] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:06.796 [2024-07-25 04:14:03.646751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.646768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.646791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.646808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.646831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.646853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.646878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.646896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.646919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.646935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.646959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.646976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.646999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.647015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.647039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.647070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.647095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.647111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.647151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.647168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.647192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.647209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.647232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.647259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.647284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.647302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.647326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.647342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.647366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.647387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.647412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.797 [2024-07-25 04:14:03.647430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:06.797 [2024-07-25 04:14:03.647453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.647470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.647493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.647510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.647534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.647551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.647574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.647591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.647614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.647630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.647653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.647670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.647693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.647710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.647734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.647750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.647773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.647790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.647814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.647830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.647854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.647870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.647898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.647915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.647938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.647954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.647979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.647995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:06.797 [2024-07-25 04:14:03.648961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.797 [2024-07-25 04:14:03.648976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.649983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.649999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:06.798 [2024-07-25 04:14:03.650133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:06.798 [2024-07-25 04:14:03.650177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.650968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.650995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.651011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.651038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.651054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.651081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.651099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.651127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.651144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.651171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.651188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.651215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.651258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.651290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.651308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.651336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.651354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.651381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.651398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.651426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.651443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.651471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.798 [2024-07-25 04:14:03.651488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:31:06.798 [2024-07-25 04:14:03.651516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:03.651547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:03.651575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:03.651592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.204287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.204358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.204423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.204445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.205554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.205580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.205608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.205627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.205649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.205676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.205700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.205717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.205739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.205756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.205778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.205794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.205816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.205831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.205871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.205888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.205911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.205929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.205951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.205968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.205990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.206007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.206029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.206046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.206069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.206085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.206107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.206124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.206146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.206164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.206191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.206209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.206232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.206256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.206281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.206299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.206322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.206340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.206363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.206380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.206403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.206420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:31:06.799 [2024-07-25 04:14:19.206444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.799 [2024-07-25 04:14:19.206462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.206485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.206503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.206526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.206543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.206565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.206582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.206605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.206622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.206644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.206660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.206688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.206705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.206744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.206760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.206782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.206798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.206819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.206835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.206857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.206873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.206896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.206912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.207932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.207957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.207985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.208004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.208027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.208045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.208067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.208084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.208107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.208123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.208146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.208163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.208185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.799 [2024-07-25 04:14:19.208207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:06.799 [2024-07-25 04:14:19.208230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.800 [2024-07-25 04:14:19.208414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.800 [2024-07-25 04:14:19.208453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.800 [2024-07-25 04:14:19.208492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.800 [2024-07-25 04:14:19.208531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:06.800 [2024-07-25 04:14:19.208946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.800 [2024-07-25 04:14:19.208963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:06.800 Received shutdown signal, test time was about 32.373324 seconds 00:31:06.800 00:31:06.800 Latency(us) 00:31:06.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.800 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:06.800 Verification LBA range: start 0x0 length 0x4000 00:31:06.800 Nvme0n1 : 32.37 7964.92 31.11 0.00 0.00 16043.49 837.40 4026531.84 
00:31:06.800 =================================================================================================================== 00:31:06.800 Total : 7964.92 31.11 0.00 0.00 16043.49 837.40 4026531.84 00:31:06.800 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.056 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:07.056 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:07.056 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:07.056 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:07.056 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:07.056 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:07.056 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:07.056 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:07.056 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:07.056 rmmod nvme_tcp 00:31:07.057 rmmod nvme_fabrics 00:31:07.057 rmmod nvme_keyring 00:31:07.057 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:07.057 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:07.057 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:07.057 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@489 -- # '[' -n 950620 ']' 00:31:07.057 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 950620 00:31:07.057 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 950620 ']' 00:31:07.057 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 950620 00:31:07.057 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:31:07.057 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:07.057 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 950620 00:31:07.314 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:07.314 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:07.314 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 950620' 00:31:07.314 killing process with pid 950620 00:31:07.314 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 950620 00:31:07.314 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 950620 00:31:07.572 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:07.572 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:07.572 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:07.572 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:07.572 04:14:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:07.572 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.572 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.572 04:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.470 04:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:09.470 00:31:09.470 real 0m41.093s 00:31:09.470 user 2m4.155s 00:31:09.470 sys 0m10.363s 00:31:09.470 04:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:09.470 04:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:09.470 ************************************ 00:31:09.470 END TEST nvmf_host_multipath_status 00:31:09.470 ************************************ 00:31:09.470 04:14:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:09.470 04:14:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:09.470 04:14:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:09.470 04:14:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.470 ************************************ 00:31:09.470 START TEST nvmf_discovery_remove_ifc 00:31:09.470 ************************************ 00:31:09.470 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:09.729 * Looking for test storage... 
00:31:09.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:09.729 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:31:11.629 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:11.629 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:11.629 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:11.629 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:11.629 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:11.629 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:11.629 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:11.629 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:11.630 04:14:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:11.630 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:11.630 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:11.630 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:11.630 04:14:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:11.630 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:11.630 04:14:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:11.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:11.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:31:11.630 00:31:11.630 --- 10.0.0.2 ping statistics --- 00:31:11.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.630 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:11.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:11.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:31:11.630 00:31:11.630 --- 10.0.0.1 ping statistics --- 00:31:11.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.630 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:11.630 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=957077 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 957077 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 957077 ']' 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:11.631 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:11.889 [2024-07-25 04:14:26.962871] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:31:11.889 [2024-07-25 04:14:26.962954] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.889 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.889 [2024-07-25 04:14:26.998629] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:11.889 [2024-07-25 04:14:27.028569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.889 [2024-07-25 04:14:27.118002] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:11.889 [2024-07-25 04:14:27.118062] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:11.889 [2024-07-25 04:14:27.118078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:11.889 [2024-07-25 04:14:27.118092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:11.889 [2024-07-25 04:14:27.118103] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:11.889 [2024-07-25 04:14:27.118135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:12.146 [2024-07-25 04:14:27.269728] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:12.146 [2024-07-25 04:14:27.278004] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:12.146 null0 00:31:12.146 [2024-07-25 04:14:27.309874] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=957096 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 957096 /tmp/host.sock 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 957096 ']' 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:12.146 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:12.146 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:12.146 [2024-07-25 04:14:27.373949] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:31:12.146 [2024-07-25 04:14:27.374026] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957096 ] 00:31:12.146 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.146 [2024-07-25 04:14:27.407644] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:31:12.146 [2024-07-25 04:14:27.438326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.404 [2024-07-25 04:14:27.531128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.404 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:12.404 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:31:12.404 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:12.404 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:12.404 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.404 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:12.404 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.404 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:12.404 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.404 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:12.662 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.662 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 
00:31:12.662 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.662 04:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:13.591 [2024-07-25 04:14:28.737410] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:13.591 [2024-07-25 04:14:28.737450] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:13.591 [2024-07-25 04:14:28.737474] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:13.591 [2024-07-25 04:14:28.824760] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:13.847 [2024-07-25 04:14:28.929452] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:13.847 [2024-07-25 04:14:28.929509] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:13.847 [2024-07-25 04:14:28.929562] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:13.848 [2024-07-25 04:14:28.929585] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:13.848 [2024-07-25 04:14:28.929635] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:13.848 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.848 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:13.848 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:13.848 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:31:13.848 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.848 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:13.848 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:13.848 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:13.848 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:13.848 [2024-07-25 04:14:28.935904] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7ce370 was disconnected and freed. delete nvme_qpair. 00:31:13.848 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.848 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:13.848 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:13.848 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:13.848 04:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:13.848 04:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:13.848 04:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.848 04:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.848 04:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:13.848 
04:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:13.848 04:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:13.848 04:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:13.848 04:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.848 04:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:13.848 04:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:14.777 04:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:14.778 04:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.778 04:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.778 04:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:14.778 04:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:14.778 04:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:14.778 04:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:15.034 04:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.034 04:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:15.034 04:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:15.965 04:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:31:15.965 04:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.965 04:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.965 04:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:15.965 04:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:15.965 04:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:15.965 04:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:15.965 04:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.965 04:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:15.965 04:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:16.897 04:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:16.897 04:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.897 04:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:16.897 04:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.897 04:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:16.897 04:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:16.897 04:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:16.897 04:14:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.897 04:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:16.897 04:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:18.266 04:14:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:18.266 04:14:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.266 04:14:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:18.266 04:14:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.266 04:14:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:18.266 04:14:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:18.266 04:14:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:18.266 04:14:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.266 04:14:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:18.266 04:14:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:19.197 04:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:19.197 04:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.197 04:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.197 04:14:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:19.197 04:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:19.197 04:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:19.197 04:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:19.197 04:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.197 04:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:19.197 04:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:19.197 [2024-07-25 04:14:34.371030] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:19.197 [2024-07-25 04:14:34.371112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.197 [2024-07-25 04:14:34.371145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.197 [2024-07-25 04:14:34.371163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.197 [2024-07-25 04:14:34.371177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.197 [2024-07-25 04:14:34.371190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.197 [2024-07-25 04:14:34.371203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.197 [2024-07-25 04:14:34.371216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.197 [2024-07-25 04:14:34.371251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.197 [2024-07-25 04:14:34.371268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.197 [2024-07-25 04:14:34.371290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.197 [2024-07-25 04:14:34.371304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x794d70 is same with the state(5) to be set 00:31:19.197 [2024-07-25 04:14:34.381050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x794d70 (9): Bad file descriptor 00:31:19.197 [2024-07-25 04:14:34.391096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:20.131 04:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:20.131 04:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:20.131 04:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:20.131 04:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.131 04:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:20.131 04:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:20.131 04:14:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:20.131 [2024-07-25 04:14:35.422283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:20.131 [2024-07-25 04:14:35.422352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x794d70 with addr=10.0.0.2, port=4420 00:31:20.131 [2024-07-25 04:14:35.422379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x794d70 is same with the state(5) to be set 00:31:20.131 [2024-07-25 04:14:35.422426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x794d70 (9): Bad file descriptor 00:31:20.131 [2024-07-25 04:14:35.422905] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:20.131 [2024-07-25 04:14:35.422954] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:20.131 [2024-07-25 04:14:35.422974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:20.131 [2024-07-25 04:14:35.422993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:20.131 [2024-07-25 04:14:35.423027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:20.131 [2024-07-25 04:14:35.423047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:20.389 04:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.389 04:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:20.389 04:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:21.322 [2024-07-25 04:14:36.425560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:21.322 [2024-07-25 04:14:36.425623] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:21.322 [2024-07-25 04:14:36.425637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:21.322 [2024-07-25 04:14:36.425665] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:21.322 [2024-07-25 04:14:36.425697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:21.322 [2024-07-25 04:14:36.425741] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:21.322 [2024-07-25 04:14:36.425805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:21.322 [2024-07-25 04:14:36.425832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:21.322 [2024-07-25 04:14:36.425852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:21.322 [2024-07-25 04:14:36.425865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:21.322 [2024-07-25 04:14:36.425879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:21.322 [2024-07-25 04:14:36.425892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:21.322 [2024-07-25 04:14:36.425906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:21.322 [2024-07-25 04:14:36.425919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:21.322 [2024-07-25 04:14:36.425933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:21.322 [2024-07-25 04:14:36.425946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:21.322 [2024-07-25 04:14:36.425959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:31:21.322 [2024-07-25 04:14:36.426113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x794210 (9): Bad file descriptor 00:31:21.322 [2024-07-25 04:14:36.427130] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:21.322 [2024-07-25 04:14:36.427152] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.322 04:14:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:21.322 04:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:22.696 04:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:22.696 04:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:22.696 04:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:22.696 04:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.696 04:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:22.696 04:14:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:22.696 04:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:22.696 04:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.696 04:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:22.696 04:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:23.261 [2024-07-25 04:14:38.443028] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:23.261 [2024-07-25 04:14:38.443057] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:23.261 [2024-07-25 04:14:38.443083] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:23.518 [2024-07-25 04:14:38.571513] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:23.518 04:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:23.518 04:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.518 04:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.518 04:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:23.518 04:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:23.518 04:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:23.518 04:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:31:23.518 04:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.518 04:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:23.519 04:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:23.519 [2024-07-25 04:14:38.756700] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:23.519 [2024-07-25 04:14:38.756752] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:23.519 [2024-07-25 04:14:38.756790] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:23.519 [2024-07-25 04:14:38.756816] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:23.519 [2024-07-25 04:14:38.756832] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:23.519 [2024-07-25 04:14:38.762035] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7d7900 was disconnected and freed. delete nvme_qpair. 
00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 957096 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 957096 ']' 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 957096 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 957096 00:31:24.452 
04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 957096' 00:31:24.452 killing process with pid 957096 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 957096 00:31:24.452 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 957096 00:31:24.710 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:24.710 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:24.710 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:24.710 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:24.710 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:24.710 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:24.710 04:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:24.710 rmmod nvme_tcp 00:31:24.710 rmmod nvme_fabrics 00:31:24.710 rmmod nvme_keyring 00:31:24.967 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:24.967 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:24.967 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:31:24.967 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 957077 ']' 00:31:24.967 04:14:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 957077 00:31:24.967 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 957077 ']' 00:31:24.967 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 957077 00:31:24.967 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:24.967 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:24.967 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 957077 00:31:24.967 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:24.968 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:24.968 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 957077' 00:31:24.968 killing process with pid 957077 00:31:24.968 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 957077 00:31:24.968 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 957077 00:31:25.226 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:25.226 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:25.226 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:25.226 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:25.226 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:25.226 04:14:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.226 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.226 04:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.127 04:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:27.127 00:31:27.127 real 0m17.605s 00:31:27.127 user 0m25.574s 00:31:27.127 sys 0m3.029s 00:31:27.127 04:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:27.127 04:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:27.127 ************************************ 00:31:27.127 END TEST nvmf_discovery_remove_ifc 00:31:27.127 ************************************ 00:31:27.127 04:14:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:27.127 04:14:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:27.127 04:14:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:27.127 04:14:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.127 ************************************ 00:31:27.127 START TEST nvmf_identify_kernel_target 00:31:27.127 ************************************ 00:31:27.127 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:27.127 * Looking for test storage... 
00:31:27.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.385 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:27.386 04:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:29.286 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:29.286 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:29.286 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:29.286 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:29.286 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:29.286 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:29.286 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:31:29.286 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:29.286 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:29.286 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:29.286 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:29.286 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:29.286 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:29.286 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:29.287 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.287 04:14:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:29.287 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:29.287 04:14:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:29.287 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:29.287 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:29.287 
04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:29.287 
04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:29.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:29.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:31:29.287 00:31:29.287 --- 10.0.0.2 ping statistics --- 00:31:29.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.287 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:29.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:29.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:31:29.287 00:31:29.287 --- 10.0.0.1 ping statistics --- 00:31:29.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.287 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.287 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.288 04:14:44 
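The nvmf_tcp_init trace above moves one port of the two-port NIC into a private network namespace, so initiator and target traffic actually crosses the wire instead of loopback. A minimal sketch of the same topology, assuming the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing seen in this run (requires root):

```shell
# Sketch of the netns-based TCP test topology traced above; requires root.
# Interface names and addresses are taken from this log, not hard rules.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Target side: port 0 lives in its own namespace.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Initiator side: port 1 stays in the default namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify both directions before starting the test, as the harness does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

The harness then prefixes the target application with `ip netns exec "$NS"` so only the initiator-side commands run in the default namespace.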
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:29.288 04:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:30.673 Waiting for block devices as requested 00:31:30.673 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:30.673 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:30.673 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:30.673 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:30.931 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:30.931 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:30.931 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:30.931 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:31.189 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:31.189 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:31.189 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:31.189 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:31.445 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:31.445 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:31.446 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:31.446 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:31.703 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:31.704 No valid GPT data, bailing 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:31.704 04:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:31.962 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:31.962 00:31:31.962 Discovery Log Number of Records 2, Generation counter 2 00:31:31.962 =====Discovery Log Entry 0====== 00:31:31.962 trtype: tcp 00:31:31.962 adrfam: ipv4 00:31:31.962 subtype: current discovery subsystem 00:31:31.962 treq: not specified, sq flow control disable supported 00:31:31.962 portid: 1 00:31:31.962 trsvcid: 4420 00:31:31.962 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:31.962 traddr: 10.0.0.1 00:31:31.962 eflags: none 00:31:31.962 sectype: none 00:31:31.962 =====Discovery Log Entry 1====== 00:31:31.962 trtype: tcp 00:31:31.962 adrfam: ipv4 00:31:31.962 subtype: nvme subsystem 00:31:31.962 treq: not specified, sq flow control disable supported 00:31:31.962 portid: 1 
00:31:31.962 trsvcid: 4420 00:31:31.962 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:31.962 traddr: 10.0.0.1 00:31:31.962 eflags: none 00:31:31.962 sectype: none 00:31:31.962 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:31.962 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:31.962 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.962 ===================================================== 00:31:31.962 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:31.962 ===================================================== 00:31:31.962 Controller Capabilities/Features 00:31:31.962 ================================ 00:31:31.962 Vendor ID: 0000 00:31:31.962 Subsystem Vendor ID: 0000 00:31:31.962 Serial Number: 8658e0f556321ca30515 00:31:31.962 Model Number: Linux 00:31:31.962 Firmware Version: 6.7.0-68 00:31:31.962 Recommended Arb Burst: 0 00:31:31.962 IEEE OUI Identifier: 00 00 00 00:31:31.962 Multi-path I/O 00:31:31.962 May have multiple subsystem ports: No 00:31:31.962 May have multiple controllers: No 00:31:31.962 Associated with SR-IOV VF: No 00:31:31.962 Max Data Transfer Size: Unlimited 00:31:31.962 Max Number of Namespaces: 0 00:31:31.962 Max Number of I/O Queues: 1024 00:31:31.962 NVMe Specification Version (VS): 1.3 00:31:31.962 NVMe Specification Version (Identify): 1.3 00:31:31.962 Maximum Queue Entries: 1024 00:31:31.962 Contiguous Queues Required: No 00:31:31.962 Arbitration Mechanisms Supported 00:31:31.962 Weighted Round Robin: Not Supported 00:31:31.962 Vendor Specific: Not Supported 00:31:31.962 Reset Timeout: 7500 ms 00:31:31.962 Doorbell Stride: 4 bytes 00:31:31.962 NVM Subsystem Reset: Not Supported 00:31:31.962 Command Sets Supported 00:31:31.962 NVM Command Set: Supported 00:31:31.962 Boot Partition: Not Supported 
00:31:31.962 Memory Page Size Minimum: 4096 bytes 00:31:31.962 Memory Page Size Maximum: 4096 bytes 00:31:31.962 Persistent Memory Region: Not Supported 00:31:31.962 Optional Asynchronous Events Supported 00:31:31.962 Namespace Attribute Notices: Not Supported 00:31:31.962 Firmware Activation Notices: Not Supported 00:31:31.962 ANA Change Notices: Not Supported 00:31:31.962 PLE Aggregate Log Change Notices: Not Supported 00:31:31.962 LBA Status Info Alert Notices: Not Supported 00:31:31.962 EGE Aggregate Log Change Notices: Not Supported 00:31:31.962 Normal NVM Subsystem Shutdown event: Not Supported 00:31:31.962 Zone Descriptor Change Notices: Not Supported 00:31:31.962 Discovery Log Change Notices: Supported 00:31:31.962 Controller Attributes 00:31:31.962 128-bit Host Identifier: Not Supported 00:31:31.962 Non-Operational Permissive Mode: Not Supported 00:31:31.962 NVM Sets: Not Supported 00:31:31.962 Read Recovery Levels: Not Supported 00:31:31.962 Endurance Groups: Not Supported 00:31:31.962 Predictable Latency Mode: Not Supported 00:31:31.962 Traffic Based Keep ALive: Not Supported 00:31:31.962 Namespace Granularity: Not Supported 00:31:31.962 SQ Associations: Not Supported 00:31:31.962 UUID List: Not Supported 00:31:31.962 Multi-Domain Subsystem: Not Supported 00:31:31.962 Fixed Capacity Management: Not Supported 00:31:31.962 Variable Capacity Management: Not Supported 00:31:31.962 Delete Endurance Group: Not Supported 00:31:31.962 Delete NVM Set: Not Supported 00:31:31.962 Extended LBA Formats Supported: Not Supported 00:31:31.962 Flexible Data Placement Supported: Not Supported 00:31:31.962 00:31:31.962 Controller Memory Buffer Support 00:31:31.962 ================================ 00:31:31.962 Supported: No 00:31:31.962 00:31:31.962 Persistent Memory Region Support 00:31:31.962 ================================ 00:31:31.962 Supported: No 00:31:31.962 00:31:31.962 Admin Command Set Attributes 00:31:31.962 ============================ 00:31:31.962 Security 
Send/Receive: Not Supported 00:31:31.962 Format NVM: Not Supported 00:31:31.962 Firmware Activate/Download: Not Supported 00:31:31.962 Namespace Management: Not Supported 00:31:31.962 Device Self-Test: Not Supported 00:31:31.962 Directives: Not Supported 00:31:31.962 NVMe-MI: Not Supported 00:31:31.962 Virtualization Management: Not Supported 00:31:31.962 Doorbell Buffer Config: Not Supported 00:31:31.962 Get LBA Status Capability: Not Supported 00:31:31.962 Command & Feature Lockdown Capability: Not Supported 00:31:31.962 Abort Command Limit: 1 00:31:31.962 Async Event Request Limit: 1 00:31:31.962 Number of Firmware Slots: N/A 00:31:31.962 Firmware Slot 1 Read-Only: N/A 00:31:31.962 Firmware Activation Without Reset: N/A 00:31:31.962 Multiple Update Detection Support: N/A 00:31:31.962 Firmware Update Granularity: No Information Provided 00:31:31.962 Per-Namespace SMART Log: No 00:31:31.962 Asymmetric Namespace Access Log Page: Not Supported 00:31:31.962 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:31.962 Command Effects Log Page: Not Supported 00:31:31.962 Get Log Page Extended Data: Supported 00:31:31.962 Telemetry Log Pages: Not Supported 00:31:31.962 Persistent Event Log Pages: Not Supported 00:31:31.962 Supported Log Pages Log Page: May Support 00:31:31.962 Commands Supported & Effects Log Page: Not Supported 00:31:31.962 Feature Identifiers & Effects Log Page:May Support 00:31:31.962 NVMe-MI Commands & Effects Log Page: May Support 00:31:31.962 Data Area 4 for Telemetry Log: Not Supported 00:31:31.962 Error Log Page Entries Supported: 1 00:31:31.962 Keep Alive: Not Supported 00:31:31.963 00:31:31.963 NVM Command Set Attributes 00:31:31.963 ========================== 00:31:31.963 Submission Queue Entry Size 00:31:31.963 Max: 1 00:31:31.963 Min: 1 00:31:31.963 Completion Queue Entry Size 00:31:31.963 Max: 1 00:31:31.963 Min: 1 00:31:31.963 Number of Namespaces: 0 00:31:31.963 Compare Command: Not Supported 00:31:31.963 Write Uncorrectable Command: 
Not Supported 00:31:31.963 Dataset Management Command: Not Supported 00:31:31.963 Write Zeroes Command: Not Supported 00:31:31.963 Set Features Save Field: Not Supported 00:31:31.963 Reservations: Not Supported 00:31:31.963 Timestamp: Not Supported 00:31:31.963 Copy: Not Supported 00:31:31.963 Volatile Write Cache: Not Present 00:31:31.963 Atomic Write Unit (Normal): 1 00:31:31.963 Atomic Write Unit (PFail): 1 00:31:31.963 Atomic Compare & Write Unit: 1 00:31:31.963 Fused Compare & Write: Not Supported 00:31:31.963 Scatter-Gather List 00:31:31.963 SGL Command Set: Supported 00:31:31.963 SGL Keyed: Not Supported 00:31:31.963 SGL Bit Bucket Descriptor: Not Supported 00:31:31.963 SGL Metadata Pointer: Not Supported 00:31:31.963 Oversized SGL: Not Supported 00:31:31.963 SGL Metadata Address: Not Supported 00:31:31.963 SGL Offset: Supported 00:31:31.963 Transport SGL Data Block: Not Supported 00:31:31.963 Replay Protected Memory Block: Not Supported 00:31:31.963 00:31:31.963 Firmware Slot Information 00:31:31.963 ========================= 00:31:31.963 Active slot: 0 00:31:31.963 00:31:31.963 00:31:31.963 Error Log 00:31:31.963 ========= 00:31:31.963 00:31:31.963 Active Namespaces 00:31:31.963 ================= 00:31:31.963 Discovery Log Page 00:31:31.963 ================== 00:31:31.963 Generation Counter: 2 00:31:31.963 Number of Records: 2 00:31:31.963 Record Format: 0 00:31:31.963 00:31:31.963 Discovery Log Entry 0 00:31:31.963 ---------------------- 00:31:31.963 Transport Type: 3 (TCP) 00:31:31.963 Address Family: 1 (IPv4) 00:31:31.963 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:31.963 Entry Flags: 00:31:31.963 Duplicate Returned Information: 0 00:31:31.963 Explicit Persistent Connection Support for Discovery: 0 00:31:31.963 Transport Requirements: 00:31:31.963 Secure Channel: Not Specified 00:31:31.963 Port ID: 1 (0x0001) 00:31:31.963 Controller ID: 65535 (0xffff) 00:31:31.963 Admin Max SQ Size: 32 00:31:31.963 Transport Service Identifier: 4420 
00:31:31.963 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:31.963 Transport Address: 10.0.0.1 00:31:31.963 Discovery Log Entry 1 00:31:31.963 ---------------------- 00:31:31.963 Transport Type: 3 (TCP) 00:31:31.963 Address Family: 1 (IPv4) 00:31:31.963 Subsystem Type: 2 (NVM Subsystem) 00:31:31.963 Entry Flags: 00:31:31.963 Duplicate Returned Information: 0 00:31:31.963 Explicit Persistent Connection Support for Discovery: 0 00:31:31.963 Transport Requirements: 00:31:31.963 Secure Channel: Not Specified 00:31:31.963 Port ID: 1 (0x0001) 00:31:31.963 Controller ID: 65535 (0xffff) 00:31:31.963 Admin Max SQ Size: 32 00:31:31.963 Transport Service Identifier: 4420 00:31:31.963 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:31.963 Transport Address: 10.0.0.1 00:31:31.963 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:31.963 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.222 get_feature(0x01) failed 00:31:32.222 get_feature(0x02) failed 00:31:32.222 get_feature(0x04) failed 00:31:32.222 ===================================================== 00:31:32.222 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:32.222 ===================================================== 00:31:32.222 Controller Capabilities/Features 00:31:32.222 ================================ 00:31:32.222 Vendor ID: 0000 00:31:32.222 Subsystem Vendor ID: 0000 00:31:32.222 Serial Number: a3bedb6ecb526782a13a 00:31:32.222 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:32.222 Firmware Version: 6.7.0-68 00:31:32.222 Recommended Arb Burst: 6 00:31:32.222 IEEE OUI Identifier: 00 00 00 00:31:32.222 Multi-path I/O 00:31:32.222 May have multiple subsystem ports: Yes 00:31:32.222 May have multiple 
controllers: Yes 00:31:32.222 Associated with SR-IOV VF: No 00:31:32.222 Max Data Transfer Size: Unlimited 00:31:32.222 Max Number of Namespaces: 1024 00:31:32.222 Max Number of I/O Queues: 128 00:31:32.222 NVMe Specification Version (VS): 1.3 00:31:32.222 NVMe Specification Version (Identify): 1.3 00:31:32.222 Maximum Queue Entries: 1024 00:31:32.222 Contiguous Queues Required: No 00:31:32.222 Arbitration Mechanisms Supported 00:31:32.222 Weighted Round Robin: Not Supported 00:31:32.222 Vendor Specific: Not Supported 00:31:32.222 Reset Timeout: 7500 ms 00:31:32.222 Doorbell Stride: 4 bytes 00:31:32.222 NVM Subsystem Reset: Not Supported 00:31:32.222 Command Sets Supported 00:31:32.222 NVM Command Set: Supported 00:31:32.222 Boot Partition: Not Supported 00:31:32.222 Memory Page Size Minimum: 4096 bytes 00:31:32.222 Memory Page Size Maximum: 4096 bytes 00:31:32.222 Persistent Memory Region: Not Supported 00:31:32.222 Optional Asynchronous Events Supported 00:31:32.222 Namespace Attribute Notices: Supported 00:31:32.222 Firmware Activation Notices: Not Supported 00:31:32.222 ANA Change Notices: Supported 00:31:32.222 PLE Aggregate Log Change Notices: Not Supported 00:31:32.222 LBA Status Info Alert Notices: Not Supported 00:31:32.222 EGE Aggregate Log Change Notices: Not Supported 00:31:32.222 Normal NVM Subsystem Shutdown event: Not Supported 00:31:32.222 Zone Descriptor Change Notices: Not Supported 00:31:32.222 Discovery Log Change Notices: Not Supported 00:31:32.222 Controller Attributes 00:31:32.222 128-bit Host Identifier: Supported 00:31:32.222 Non-Operational Permissive Mode: Not Supported 00:31:32.222 NVM Sets: Not Supported 00:31:32.222 Read Recovery Levels: Not Supported 00:31:32.222 Endurance Groups: Not Supported 00:31:32.222 Predictable Latency Mode: Not Supported 00:31:32.222 Traffic Based Keep ALive: Supported 00:31:32.222 Namespace Granularity: Not Supported 00:31:32.222 SQ Associations: Not Supported 00:31:32.222 UUID List: Not Supported 
00:31:32.222 Multi-Domain Subsystem: Not Supported 00:31:32.222 Fixed Capacity Management: Not Supported 00:31:32.222 Variable Capacity Management: Not Supported 00:31:32.222 Delete Endurance Group: Not Supported 00:31:32.222 Delete NVM Set: Not Supported 00:31:32.222 Extended LBA Formats Supported: Not Supported 00:31:32.222 Flexible Data Placement Supported: Not Supported 00:31:32.222 00:31:32.222 Controller Memory Buffer Support 00:31:32.222 ================================ 00:31:32.222 Supported: No 00:31:32.222 00:31:32.222 Persistent Memory Region Support 00:31:32.222 ================================ 00:31:32.222 Supported: No 00:31:32.222 00:31:32.222 Admin Command Set Attributes 00:31:32.222 ============================ 00:31:32.222 Security Send/Receive: Not Supported 00:31:32.222 Format NVM: Not Supported 00:31:32.222 Firmware Activate/Download: Not Supported 00:31:32.222 Namespace Management: Not Supported 00:31:32.222 Device Self-Test: Not Supported 00:31:32.222 Directives: Not Supported 00:31:32.222 NVMe-MI: Not Supported 00:31:32.222 Virtualization Management: Not Supported 00:31:32.222 Doorbell Buffer Config: Not Supported 00:31:32.222 Get LBA Status Capability: Not Supported 00:31:32.222 Command & Feature Lockdown Capability: Not Supported 00:31:32.222 Abort Command Limit: 4 00:31:32.222 Async Event Request Limit: 4 00:31:32.222 Number of Firmware Slots: N/A 00:31:32.222 Firmware Slot 1 Read-Only: N/A 00:31:32.222 Firmware Activation Without Reset: N/A 00:31:32.222 Multiple Update Detection Support: N/A 00:31:32.222 Firmware Update Granularity: No Information Provided 00:31:32.222 Per-Namespace SMART Log: Yes 00:31:32.222 Asymmetric Namespace Access Log Page: Supported 00:31:32.222 ANA Transition Time : 10 sec 00:31:32.222 00:31:32.222 Asymmetric Namespace Access Capabilities 00:31:32.222 ANA Optimized State : Supported 00:31:32.222 ANA Non-Optimized State : Supported 00:31:32.222 ANA Inaccessible State : Supported 00:31:32.222 ANA Persistent Loss 
State : Supported 00:31:32.222 ANA Change State : Supported 00:31:32.222 ANAGRPID is not changed : No 00:31:32.222 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:32.222 00:31:32.222 ANA Group Identifier Maximum : 128 00:31:32.222 Number of ANA Group Identifiers : 128 00:31:32.222 Max Number of Allowed Namespaces : 1024 00:31:32.222 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:32.222 Command Effects Log Page: Supported 00:31:32.222 Get Log Page Extended Data: Supported 00:31:32.222 Telemetry Log Pages: Not Supported 00:31:32.222 Persistent Event Log Pages: Not Supported 00:31:32.222 Supported Log Pages Log Page: May Support 00:31:32.222 Commands Supported & Effects Log Page: Not Supported 00:31:32.222 Feature Identifiers & Effects Log Page: May Support 00:31:32.222 NVMe-MI Commands & Effects Log Page: May Support 00:31:32.222 Data Area 4 for Telemetry Log: Not Supported 00:31:32.222 Error Log Page Entries Supported: 128 00:31:32.222 Keep Alive: Supported 00:31:32.222 Keep Alive Granularity: 1000 ms 00:31:32.222 00:31:32.222 NVM Command Set Attributes 00:31:32.222 ========================== 00:31:32.222 Submission Queue Entry Size 00:31:32.222 Max: 64 00:31:32.222 Min: 64 00:31:32.222 Completion Queue Entry Size 00:31:32.222 Max: 16 00:31:32.222 Min: 16 00:31:32.222 Number of Namespaces: 1024 00:31:32.222 Compare Command: Not Supported 00:31:32.222 Write Uncorrectable Command: Not Supported 00:31:32.222 Dataset Management Command: Supported 00:31:32.222 Write Zeroes Command: Supported 00:31:32.222 Set Features Save Field: Not Supported 00:31:32.222 Reservations: Not Supported 00:31:32.222 Timestamp: Not Supported 00:31:32.222 Copy: Not Supported 00:31:32.222 Volatile Write Cache: Present 00:31:32.222 Atomic Write Unit (Normal): 1 00:31:32.222 Atomic Write Unit (PFail): 1 00:31:32.222 Atomic Compare & Write Unit: 1 00:31:32.222 Fused Compare & Write: Not Supported 00:31:32.222 Scatter-Gather List 00:31:32.222 SGL Command Set: Supported 00:31:32.222 SGL 
Keyed: Not Supported 00:31:32.222 SGL Bit Bucket Descriptor: Not Supported 00:31:32.222 SGL Metadata Pointer: Not Supported 00:31:32.222 Oversized SGL: Not Supported 00:31:32.222 SGL Metadata Address: Not Supported 00:31:32.222 SGL Offset: Supported 00:31:32.222 Transport SGL Data Block: Not Supported 00:31:32.222 Replay Protected Memory Block: Not Supported 00:31:32.222 00:31:32.222 Firmware Slot Information 00:31:32.222 ========================= 00:31:32.222 Active slot: 0 00:31:32.222 00:31:32.222 Asymmetric Namespace Access 00:31:32.222 =========================== 00:31:32.222 Change Count : 0 00:31:32.222 Number of ANA Group Descriptors : 1 00:31:32.222 ANA Group Descriptor : 0 00:31:32.222 ANA Group ID : 1 00:31:32.222 Number of NSID Values : 1 00:31:32.222 Change Count : 0 00:31:32.222 ANA State : 1 00:31:32.223 Namespace Identifier : 1 00:31:32.223 00:31:32.223 Commands Supported and Effects 00:31:32.223 ============================== 00:31:32.223 Admin Commands 00:31:32.223 -------------- 00:31:32.223 Get Log Page (02h): Supported 00:31:32.223 Identify (06h): Supported 00:31:32.223 Abort (08h): Supported 00:31:32.223 Set Features (09h): Supported 00:31:32.223 Get Features (0Ah): Supported 00:31:32.223 Asynchronous Event Request (0Ch): Supported 00:31:32.223 Keep Alive (18h): Supported 00:31:32.223 I/O Commands 00:31:32.223 ------------ 00:31:32.223 Flush (00h): Supported 00:31:32.223 Write (01h): Supported LBA-Change 00:31:32.223 Read (02h): Supported 00:31:32.223 Write Zeroes (08h): Supported LBA-Change 00:31:32.223 Dataset Management (09h): Supported 00:31:32.223 00:31:32.223 Error Log 00:31:32.223 ========= 00:31:32.223 Entry: 0 00:31:32.223 Error Count: 0x3 00:31:32.223 Submission Queue Id: 0x0 00:31:32.223 Command Id: 0x5 00:31:32.223 Phase Bit: 0 00:31:32.223 Status Code: 0x2 00:31:32.223 Status Code Type: 0x0 00:31:32.223 Do Not Retry: 1 00:31:32.223 Error Location: 0x28 00:31:32.223 LBA: 0x0 00:31:32.223 Namespace: 0x0 00:31:32.223 Vendor Log Page: 
0x0 00:31:32.223 ----------- 00:31:32.223 Entry: 1 00:31:32.223 Error Count: 0x2 00:31:32.223 Submission Queue Id: 0x0 00:31:32.223 Command Id: 0x5 00:31:32.223 Phase Bit: 0 00:31:32.223 Status Code: 0x2 00:31:32.223 Status Code Type: 0x0 00:31:32.223 Do Not Retry: 1 00:31:32.223 Error Location: 0x28 00:31:32.223 LBA: 0x0 00:31:32.223 Namespace: 0x0 00:31:32.223 Vendor Log Page: 0x0 00:31:32.223 ----------- 00:31:32.223 Entry: 2 00:31:32.223 Error Count: 0x1 00:31:32.223 Submission Queue Id: 0x0 00:31:32.223 Command Id: 0x4 00:31:32.223 Phase Bit: 0 00:31:32.223 Status Code: 0x2 00:31:32.223 Status Code Type: 0x0 00:31:32.223 Do Not Retry: 1 00:31:32.223 Error Location: 0x28 00:31:32.223 LBA: 0x0 00:31:32.223 Namespace: 0x0 00:31:32.223 Vendor Log Page: 0x0 00:31:32.223 00:31:32.223 Number of Queues 00:31:32.223 ================ 00:31:32.223 Number of I/O Submission Queues: 128 00:31:32.223 Number of I/O Completion Queues: 128 00:31:32.223 00:31:32.223 ZNS Specific Controller Data 00:31:32.223 ============================ 00:31:32.223 Zone Append Size Limit: 0 00:31:32.223 00:31:32.223 00:31:32.223 Active Namespaces 00:31:32.223 ================= 00:31:32.223 get_feature(0x05) failed 00:31:32.223 Namespace ID:1 00:31:32.223 Command Set Identifier: NVM (00h) 00:31:32.223 Deallocate: Supported 00:31:32.223 Deallocated/Unwritten Error: Not Supported 00:31:32.223 Deallocated Read Value: Unknown 00:31:32.223 Deallocate in Write Zeroes: Not Supported 00:31:32.223 Deallocated Guard Field: 0xFFFF 00:31:32.223 Flush: Supported 00:31:32.223 Reservation: Not Supported 00:31:32.223 Namespace Sharing Capabilities: Multiple Controllers 00:31:32.223 Size (in LBAs): 1953525168 (931GiB) 00:31:32.223 Capacity (in LBAs): 1953525168 (931GiB) 00:31:32.223 Utilization (in LBAs): 1953525168 (931GiB) 00:31:32.223 UUID: 9b37d40e-b817-4e27-ae24-9e46bd3e22f4 00:31:32.223 Thin Provisioning: Not Supported 00:31:32.223 Per-NS Atomic Units: Yes 00:31:32.223 Atomic Boundary Size (Normal): 0 
00:31:32.223 Atomic Boundary Size (PFail): 0 00:31:32.223 Atomic Boundary Offset: 0 00:31:32.223 NGUID/EUI64 Never Reused: No 00:31:32.223 ANA group ID: 1 00:31:32.223 Namespace Write Protected: No 00:31:32.223 Number of LBA Formats: 1 00:31:32.223 Current LBA Format: LBA Format #00 00:31:32.223 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:32.223 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:32.223 rmmod nvme_tcp 00:31:32.223 rmmod nvme_fabrics 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:32.223 
04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.223 04:14:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.124 04:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:34.124 04:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:34.124 04:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:34.124 04:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:34.124 04:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:34.124 04:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:34.124 04:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:34.124 04:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:34.124 04:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:34.124 04:14:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:34.124 04:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:35.498 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:35.498 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:35.498 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:35.498 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:35.498 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:35.498 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:35.498 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:35.498 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:35.498 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:35.498 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:35.498 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:35.498 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:35.498 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:35.498 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:35.498 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:35.498 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:36.430 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:36.430 00:31:36.430 real 0m9.264s 00:31:36.430 user 0m1.983s 00:31:36.430 sys 0m3.215s 00:31:36.430 04:14:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:36.430 04:14:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:36.430 ************************************ 00:31:36.430 END TEST nvmf_identify_kernel_target 00:31:36.430 ************************************ 00:31:36.430 04:14:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:36.430 04:14:51 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:36.430 04:14:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:36.430 04:14:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.430 ************************************ 00:31:36.430 START TEST nvmf_auth_host 00:31:36.430 ************************************ 00:31:36.430 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:36.430 * Looking for test storage... 00:31:36.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:36.688 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:36.688 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:36.688 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.688 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.688 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:36.688 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.688 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.688 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.688 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.688 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.689 04:14:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # 
hostnqn=nqn.2024-02.io.spdk:host0 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:36.689 04:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:38.587 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:38.587 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:38.588 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:38.588 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:38.588 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:38.588 04:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:38.588 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:38.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:38.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:31:38.846 00:31:38.846 --- 10.0.0.2 ping statistics --- 00:31:38.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.846 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:38.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:38.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:31:38.846 00:31:38.846 --- 10.0.0.1 ping statistics --- 00:31:38.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.846 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
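The `nvmf_tcp_init` trace above moves one interface (`cvl_0_0`) into a private network namespace while its peer (`cvl_0_1`) stays in the root namespace, so the target and initiator exercise a real TCP path, and the `ping` checks in both directions confirm the link before the test proceeds. A minimal sketch of that topology, assuming a veth pair stands in for the `cvl_*` hardware pair (root required, so this is illustrative rather than part of the test):

```shell
# Sketch of the topology built by nvmf_tcp_init (nvmf/common.sh@418).
# Assumption: a veth pair replaces the cvl_0_0/cvl_0_1 device pair.
ip netns add cvl_0_0_ns_spdk                     # target-side namespace
ip link add cvl_0_1 type veth peer name cvl_0_0  # hypothetical veth pair
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # NVMF_INITIATOR_IP (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP
ping -c 1 10.0.0.2                               # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator check
```

With the namespace in place, `NVMF_APP` is prefixed with `ip netns exec cvl_0_0_ns_spdk` so `nvmf_tgt` runs entirely on the target side of the pair.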
00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=964284 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 964284 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 964284 ']' 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:38.846 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4495fcd6199b0f7224007f2bc68a1d12 00:31:39.105 04:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.IqD 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4495fcd6199b0f7224007f2bc68a1d12 0 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4495fcd6199b0f7224007f2bc68a1d12 0 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4495fcd6199b0f7224007f2bc68a1d12 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.IqD 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.IqD 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.IqD 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:39.105 04:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9827cc70e90ac83f7df68827fa0bac29958dac7f9ab0fb0019d67063a182e785 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.oQB 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9827cc70e90ac83f7df68827fa0bac29958dac7f9ab0fb0019d67063a182e785 3 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9827cc70e90ac83f7df68827fa0bac29958dac7f9ab0fb0019d67063a182e785 3 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:39.105 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9827cc70e90ac83f7df68827fa0bac29958dac7f9ab0fb0019d67063a182e785 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.oQB 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.oQB 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.oQB 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f342b9805b620b0e29e91cdb4e9ec331e99b3ec1a34501cb 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.o1R 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f342b9805b620b0e29e91cdb4e9ec331e99b3ec1a34501cb 0 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f342b9805b620b0e29e91cdb4e9ec331e99b3ec1a34501cb 0 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f342b9805b620b0e29e91cdb4e9ec331e99b3ec1a34501cb 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:39.106 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.o1R 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.o1R 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.o1R 
00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3893ba752a78dde6858bbc667839710c95feed34a895670d 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DYa 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3893ba752a78dde6858bbc667839710c95feed34a895670d 2 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3893ba752a78dde6858bbc667839710c95feed34a895670d 2 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3893ba752a78dde6858bbc667839710c95feed34a895670d 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:39.364 04:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DYa 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DYa 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.DYa 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d0f48d76f0f3bb5a30487a3670ee1705 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.oQ3 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d0f48d76f0f3bb5a30487a3670ee1705 1 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d0f48d76f0f3bb5a30487a3670ee1705 1 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=d0f48d76f0f3bb5a30487a3670ee1705 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.oQ3 00:31:39.364 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.oQ3 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.oQ3 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=21ffd182f71ef2a0192ba494a0cde802 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.80n 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 21ffd182f71ef2a0192ba494a0cde802 1 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 21ffd182f71ef2a0192ba494a0cde802 1 00:31:39.365 04:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=21ffd182f71ef2a0192ba494a0cde802 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.80n 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.80n 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.80n 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=da7c8685f75cd49dca052dce19ab41e041061ea053bc9e1c 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.eav 00:31:39.365 04:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key da7c8685f75cd49dca052dce19ab41e041061ea053bc9e1c 2 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 da7c8685f75cd49dca052dce19ab41e041061ea053bc9e1c 2 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=da7c8685f75cd49dca052dce19ab41e041061ea053bc9e1c 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.eav 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.eav 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.eav 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=91c81868f8c979a884368857cf2c82d9 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.B49 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 91c81868f8c979a884368857cf2c82d9 0 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 91c81868f8c979a884368857cf2c82d9 0 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=91c81868f8c979a884368857cf2c82d9 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:39.365 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.B49 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.B49 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.B49 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # len=64 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=66495df7663d87ad692eaa53945e0ce538dc8816c21b42accdba2879cddd88bd 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bmU 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 66495df7663d87ad692eaa53945e0ce538dc8816c21b42accdba2879cddd88bd 3 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 66495df7663d87ad692eaa53945e0ce538dc8816c21b42accdba2879cddd88bd 3 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=66495df7663d87ad692eaa53945e0ce538dc8816c21b42accdba2879cddd88bd 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bmU 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bmU 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.bmU 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 964284 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
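Each `gen_dhchap_key <digest> <len>` call above reads `len/2` random bytes with `xxd` and pipes them through the short inline Python seen as `python -` in the trace, producing the DH-HMAC-CHAP wire format `DHHC-1:<hmac-id>:<base64(secret || crc32)>:`, where the hmac id is 0/1/2/3 for null/sha256/sha384/sha512 (the `digests` map) and the CRC-32 of the secret is appended little-endian before base64 encoding. A standalone sketch of that formatting step (the inline Python here approximates `format_key` from nvmf/common.sh):

```shell
# Format a raw hex secret as a DHHC-1 key, as format_dhchap_key does.
# digest id: 0=null, 1=sha256, 2=sha384, 3=sha512 (the 'digests' map above).
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, as for the null/sha256 keys
python3 - "$key" 0 <<'EOF'
import sys, base64, zlib
secret = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(secret).to_bytes(4, "little")   # CRC-32, little-endian suffix
b64 = base64.b64encode(secret + crc).decode()
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), b64))
EOF
```

The result is written to a `mktemp`-created `/tmp/spdk.key-*` file, `chmod 0600`, and its path stored in `keys[]`/`ckeys[]` for the keyring RPCs that follow.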
common/autotest_common.sh@831 -- # '[' -z 964284 ']' 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:39.623 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.882 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:39.882 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:39.882 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:39.882 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IqD 00:31:39.882 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.882 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.882 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.882 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.oQB ]] 00:31:39.882 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oQB 00:31:39.882 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.882 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.o1R 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.DYa ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DYa 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.oQ3 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.80n ]] 00:31:39.882 04:14:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.80n 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.eav 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.B49 ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.B49 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.bmU 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # 
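The `keyring_file_add_key` RPCs traced above (host/auth.sh@80-82) load each generated key file into the running target's keyring as `key0`..`key4`, plus `ckey0`..`ckey3` for the controller-side secrets (`ckeys[4]` is intentionally empty, hence the final `[[ -n '' ]]` no-op). The loop amounts to roughly the following, where `scripts/rpc.py` stands in for the suite's `rpc_cmd` wrapper (an assumption about the wrapper's target):

```shell
# Hypothetical sketch of the host/auth.sh@80 loop: register host and
# controller DHCHAP keys with nvmf_tgt over /var/tmp/spdk.sock.
for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then
        scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done
```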
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:39.882 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:40.816 Waiting for block devices as requested 00:31:41.074 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:41.074 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:41.333 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:41.333 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:41.333 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:41.591 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:41.591 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:41.591 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:41.591 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:41.849 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:41.849 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:41.849 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:41.849 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:42.107 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:42.107 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:42.107 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:42.107 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:42.673 No valid GPT data, bailing 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:42.673 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:42.674 04:14:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:42.674 00:31:42.674 Discovery Log Number of Records 2, Generation counter 2 00:31:42.674 =====Discovery Log Entry 0====== 00:31:42.674 trtype: tcp 00:31:42.674 adrfam: ipv4 00:31:42.674 subtype: current discovery subsystem 00:31:42.674 treq: not specified, sq flow control disable supported 00:31:42.674 portid: 1 00:31:42.674 trsvcid: 4420 00:31:42.674 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:42.674 traddr: 10.0.0.1 00:31:42.674 eflags: none 00:31:42.674 sectype: none 00:31:42.674 =====Discovery Log Entry 1====== 00:31:42.674 trtype: tcp 00:31:42.674 adrfam: ipv4 00:31:42.674 subtype: nvme subsystem 00:31:42.674 treq: not specified, sq flow control 
disable supported 00:31:42.674 portid: 1 00:31:42.674 trsvcid: 4420 00:31:42.674 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:42.674 traddr: 10.0.0.1 00:31:42.674 eflags: none 00:31:42.674 sectype: none 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.674 04:14:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.674 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.932 nvme0n1 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:42.932 
04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:42.932 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]] 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.933 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.191 nvme0n1 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.191 04:14:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.191 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.449 nvme0n1 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.449 04:14:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]] 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:31:43.449 04:14:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.449 04:14:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.449 nvme0n1 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.449 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.709 04:14:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]] 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 
00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.709 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.710 nvme0n1 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.710 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.967 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.967 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.967 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.967 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.967 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.967 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.967 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.968 04:14:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.968 nvme0n1 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.968 
04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]] 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.968 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.226 04:14:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.226 nvme0n1 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.226 04:14:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:44.226 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:44.227 04:14:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.227 04:14:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.227 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.485 nvme0n1 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.485 04:14:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]] 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:44.485 04:14:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.485 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.743 nvme0n1 00:31:44.743 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.743 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.743 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.743 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.743 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.743 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.743 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.743 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.743 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.743 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe3072 3 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]] 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:44.743 04:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.743 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.001 nvme0n1 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.001 
04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.001 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.259 nvme0n1 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:45.259 04:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]] 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.259 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.823 nvme0n1 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:31:45.823 04:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.823 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.081 nvme0n1 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]] 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:46.081 
04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.081 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.340 nvme0n1 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.340 04:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]] 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.340 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:46.603 nvme0n1 00:31:46.603 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.603 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.603 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.603 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.603 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.860 
04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.860 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.118 nvme0n1 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]] 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.118 04:15:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.118 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.682 nvme0n1 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.682 04:15:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.682 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.683 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.683 04:15:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.683 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.683 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.683 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.683 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:47.683 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.683 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.246 nvme0n1 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.246 04:15:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]] 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:48.246 04:15:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.246 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.809 nvme0n1 00:31:48.809 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.810 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.810 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.810 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.810 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.810 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.069 04:15:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]] 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.069 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.070 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.070 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.070 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.070 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.070 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.070 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.070 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.070 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.070 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.070 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.070 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.070 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:49.070 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.070 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
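The records above repeat one pattern per key: configure the host's DH-CHAP digest and DH group, attach the controller with `--dhchap-key keyN` (plus `--dhchap-ctrlr-key ckeyN` when a controller key exists — note keyid 4 has none), verify the controller appears, then detach before the next iteration. A rough standalone sketch of that loop follows; this is not the actual host/auth.sh, `rpc_cmd` is stubbed here (the real one wraps SPDK's scripts/rpc.py), and the address/port are taken from the log:

```shell
#!/usr/bin/env bash
# Sketch of the per-key DH-HMAC-CHAP loop seen in the log above.
# rpc_cmd is a stub so the loop structure can run standalone;
# in the real test it forwards to SPDK's scripts/rpc.py.
rpc_cmd() { echo "rpc: $*"; }

digest=sha256
dhgroup=ffdhe6144
keys=(key0 key1 key2 key3 key4)   # DHHC-1 secrets in the real script

for keyid in "${!keys[@]}"; do
  # Restrict the host to this digest/dhgroup pair for the attempt
  rpc_cmd bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with the matching key; keyid 4 has no controller key in the log
  if [ "$keyid" -lt 4 ]; then
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
  else
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      --dhchap-key "key${keyid}"
  fi

  # Confirm the controller came up, then tear down for the next key
  rpc_cmd bdev_nvme_get_controllers
  rpc_cmd bdev_nvme_detach_controller nvme0
done
```

The outer loop in the log additionally iterates over DH groups (ffdhe6144, then ffdhe8192), re-running this inner key loop for each.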
00:31:49.634 nvme0n1 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.634 
04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.634 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.199 nvme0n1 00:31:50.199 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.199 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]] 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.200 04:15:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.200 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.133 nvme0n1 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.133 04:15:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:51.133 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.134 04:15:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.134 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.507 nvme0n1 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.507 04:15:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]] 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:52.507 04:15:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.507 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.508 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.074 nvme0n1 00:31:53.074 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.074 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.074 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.074 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.074 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.074 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.332 04:15:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]] 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.332 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:54.266 nvme0n1 00:31:54.266 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.266 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.266 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.266 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.266 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.266 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.267 
04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.267 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.200 nvme0n1 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:55.200 04:15:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]] 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.200 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.458 nvme0n1 00:31:55.458 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:31:55.459 04:15:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.459 nvme0n1 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.459 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]] 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.717 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:55.717 
04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:55.717  nvme0n1
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:55.717  04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:55.717  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:55.717  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:55.717  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:55.717  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==:
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW:
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==:
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]]
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW:
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:55.975  nvme0n1
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=:
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=:
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:55.975  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.232  nvme0n1
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU:
00:31:56.232  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=:
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU:
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]]
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=:
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.233  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.490  nvme0n1
00:31:56.490  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.490  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:56.490  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==:
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==:
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==:
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]]
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==:
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.491  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.749  nvme0n1
00:31:56.749  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.749  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:56.749  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:56.749  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.749  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.749  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.749  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:56.749  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:56.749  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.749  04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn:
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN:
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn:
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]]
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN:
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.749  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.007  nvme0n1
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==:
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW:
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==:
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]]
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW:
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.007  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.265  nvme0n1
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=:
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:31:57.265  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=:
00:31:57.266  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:31:57.266  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:31:57.266  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:57.266  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:31:57.266  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:31:57.266  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:31:57.266  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:57.266  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:31:57.266  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.266  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.266  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.266  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:57.266  04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:57.266
04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.266 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.266 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.266 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.266 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.266 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.266 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.266 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.266 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.266 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:57.266 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.266 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.524 nvme0n1 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]] 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.524 04:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.524 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.089 nvme0n1 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.089 04:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.089 04:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.089 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.346 nvme0n1 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.346 04:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]] 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:58.346 04:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.346 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.603 nvme0n1 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.603 04:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]] 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.603 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:58.860 nvme0n1 00:31:58.860 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.860 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.860 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.860 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.860 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:31:59.117 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.118 
04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.118 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.375 nvme0n1 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]] 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.375 04:15:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.375 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.940 nvme0n1 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.940 04:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.940 04:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.940 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.504 nvme0n1 00:32:00.504 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.504 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.504 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.504 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.504 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.504 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.796 04:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]] 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:00.796 04:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.796 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.361 nvme0n1 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.361 04:15:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]] 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.361 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:01.925 nvme0n1 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.925 
04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.925 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.490 nvme0n1 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]] 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:02.490 04:15:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.490 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.423 nvme0n1 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.423 04:15:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:03.423 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.424 04:15:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.424 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.798 nvme0n1 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.798 04:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]] 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:04.798 04:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.798 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.738 nvme0n1 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.738 04:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]] 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.738 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:06.672 nvme0n1 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.672 
04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.672 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.606 nvme0n1 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:07.606 04:15:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]] 00:32:07.606 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.607 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.865 nvme0n1 00:32:07.865 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.865 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.865 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.865 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.865 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.865 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.865 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:07.865 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.865 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.865 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:32:07.865 04:15:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.865 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.124 nvme0n1 00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn:
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN:
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn:
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]]
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN:
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.124 nvme0n1
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:08.124 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==:
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW:
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==:
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]]
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW:
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.381 nvme0n1
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.381 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=:
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=:
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.638 nvme0n1
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU:
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=:
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU:
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]]
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=:
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.638 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.895 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.895 nvme0n1
00:32:08.895 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.895 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:08.895 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.895 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.895 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:08.895 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.895 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:08.895 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:08.895 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.895 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:08.895 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.895 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:08.895 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:32:08.895 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==:
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==:
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==:
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]]
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==:
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:09.152 nvme0n1
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:09.152 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn:
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN:
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn:
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]]
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN:
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:09.410 nvme0n1
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:09.410 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==:
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW:
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==:
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]]
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW:
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:09.667 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:09.668 nvme0n1 00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.668 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.925 
04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.925 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.925 nvme0n1 00:32:09.925 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.925 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.925 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.925 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.925 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.925 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.925 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.925 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.925 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.925 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]] 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.183 04:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.183 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.441 nvme0n1 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.441 04:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.441 04:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.441 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.699 nvme0n1 00:32:10.699 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.699 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.699 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.699 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.699 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.699 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.699 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.699 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.699 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.699 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.699 04:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]] 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:10.700 04:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.700 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.265 nvme0n1 00:32:11.265 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.266 04:15:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]] 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.266 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:11.524 nvme0n1 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:11.524 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.525 
04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.525 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.783 nvme0n1 00:32:11.783 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.783 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.783 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.783 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.783 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.783 04:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]] 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:32:11.783 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.784 04:15:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.784 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.349 nvme0n1 00:32:12.349 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.349 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.349 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.349 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.349 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.349 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.349 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.349 04:15:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.349 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.349 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.607 04:15:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.607 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.173 nvme0n1 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.173 04:15:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]] 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.173 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:13.174 04:15:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.174 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.740 nvme0n1 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.740 04:15:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]] 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.740 04:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:14.334 nvme0n1 00:32:14.334 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.334 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.334 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.334 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.334 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.334 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.334 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.334 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.335 
04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.335 04:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.901 nvme0n1 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.901 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQ5NWZjZDYxOTliMGY3MjI0MDA3ZjJiYzY4YTFkMTKTmxoU: 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: ]] 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTgyN2NjNzBlOTBhYzgzZjdkZjY4ODI3ZmEwYmFjMjk5NThkYWM3ZjlhYjBmYjAwMTlkNjcwNjNhMTgyZTc4NQs8TFg=: 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.902 04:15:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.902 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.835 nvme0n1 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.835 04:15:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.835 04:15:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:15.835 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.836 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.770 nvme0n1 00:32:16.770 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.770 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.770 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.770 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.770 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.770 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.027 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.027 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.027 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.028 04:15:32 
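The `--dhchap-key`/`--dhchap-ctrlr-key` secrets exchanged above use the DH-HMAC-CHAP secret representation `DHHC-1:<hash>:<base64>:`. To my understanding the base64 payload is the secret followed by a 4-byte CRC-32 tail; below is a minimal sketch unpacking one of the keys from this log (GNU `base64` and `head -c -N` assumed):

```shell
#!/usr/bin/env bash
# Key 1 from this log run; the "00" field means no hash transform is applied to the secret.
key='DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==:'

b64=${key#DHHC-1:00:}      # strip the format prefix
b64=${b64%:}               # and the trailing colon

total=$(printf '%s' "$b64" | base64 -d | wc -c)
# Drop the last 4 bytes (assumed CRC-32 tail); requires GNU head.
secret=$(printf '%s' "$b64" | base64 -d | head -c -4)

echo "payload: $total bytes"
echo "secret:  ${#secret} chars"
```

For this key the payload decodes to 52 bytes: a 48-character ASCII secret plus the 4-byte tail.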
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDBmNDhkNzZmMGYzYmI1YTMwNDg3YTM2NzBlZTE3MDXjI/Pn: 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: ]] 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFmZmQxODJmNzFlZjJhMDE5MmJhNDk0YTBjZGU4MDIg7+wN: 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:17.028 04:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.028 04:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.960 nvme0n1 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.960 04:15:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGE3Yzg2ODVmNzVjZDQ5ZGNhMDUyZGNlMTlhYjQxZTA0MTA2MWVhMDUzYmM5ZTFj3g61zw==: 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: ]] 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFjODE4NjhmOGM5NzlhODg0MzY4ODU3Y2YyYzgyZDnX4ViW: 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.960 04:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
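The repeated `get_main_ns_ip` trace above maps the transport type to the *name* of the environment variable holding the address, then dereferences that name, resolving to 10.0.0.1 for tcp in this run. A re-sketch of that helper, with the variable values taken from this log's environment:

```shell
#!/usr/bin/env bash
# Sketch of the get_main_ns_ip candidate selection traced above (nvmf/common.sh):
# pick the env-var name for the transport, then use indirect expansion to read it.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the env var for this transport
    echo "${!ip}"                          # indirect expansion to its value
}

get_main_ns_ip
```

With `TEST_TRANSPORT=rdma` the same lookup would resolve `NVMF_FIRST_TARGET_IP` instead.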
00:32:18.893 nvme0n1 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0OTVkZjc2NjNkODdhZDY5MmVhYTUzOTQ1ZTBjZTUzOGRjODgxNmMyMWI0MmFjY2RiYTI4NzljZGRkODhiZLF2bJI=: 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.893 
04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.893 04:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.824 nvme0n1 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM0MmI5ODA1YjYyMGIwZTI5ZTkxY2RiNGU5ZWMzMzFlOTliM2VjMWEzNDUwMWNiDX0Mhg==: 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: ]] 00:32:19.824 
04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg5M2JhNzUyYTc4ZGRlNjg1OGJiYzY2NzgzOTcxMGM5NWZlZWQzNGE4OTU2NzBkFi3K8Q==: 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.824 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.081 request: 00:32:20.081 { 00:32:20.081 "name": "nvme0", 00:32:20.081 "trtype": "tcp", 00:32:20.081 "traddr": "10.0.0.1", 00:32:20.081 "adrfam": "ipv4", 00:32:20.081 "trsvcid": "4420", 00:32:20.081 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:20.081 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:20.081 "prchk_reftag": false, 00:32:20.081 "prchk_guard": false, 00:32:20.081 "hdgst": false, 00:32:20.081 "ddgst": false, 00:32:20.081 "method": "bdev_nvme_attach_controller", 00:32:20.081 "req_id": 1 00:32:20.081 } 00:32:20.081 Got JSON-RPC error response 00:32:20.081 response: 00:32:20.081 { 00:32:20.081 "code": -5, 00:32:20.081 "message": "Input/output error" 00:32:20.081 } 00:32:20.081 04:15:35 
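The `NOT`-wrapped attach above is a negative test: connecting without any DH-CHAP key must fail, so the JSON-RPC "Input/output error" (code -5) is the expected outcome, and the harness inverts the exit status (`es=1` on the expected failure). A minimal sketch of that inversion idiom (the real `autotest_common.sh` helper also special-cases `es > 128` for signal deaths):

```shell
#!/usr/bin/env bash
# Succeed iff the wrapped command fails -- sketch of the NOT idiom from autotest_common.sh.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # expected outcome: the command returned non-zero
}

NOT false && echo "negative test passed"
```

Used as `NOT rpc_cmd bdev_nvme_attach_controller ...`, the step passes exactly when the RPC is rejected.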
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.081 04:15:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.081 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.081 request: 00:32:20.081 { 00:32:20.081 "name": "nvme0", 00:32:20.081 "trtype": "tcp", 00:32:20.081 "traddr": "10.0.0.1", 00:32:20.081 "adrfam": "ipv4", 00:32:20.081 
"trsvcid": "4420", 00:32:20.082 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:20.082 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:20.082 "prchk_reftag": false, 00:32:20.082 "prchk_guard": false, 00:32:20.082 "hdgst": false, 00:32:20.082 "ddgst": false, 00:32:20.082 "dhchap_key": "key2", 00:32:20.082 "method": "bdev_nvme_attach_controller", 00:32:20.082 "req_id": 1 00:32:20.082 } 00:32:20.082 Got JSON-RPC error response 00:32:20.082 response: 00:32:20.082 { 00:32:20.082 "code": -5, 00:32:20.082 "message": "Input/output error" 00:32:20.082 } 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.082 
04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:20.082 04:15:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.082 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.340 request: 00:32:20.340 { 00:32:20.340 "name": "nvme0", 00:32:20.340 "trtype": "tcp", 00:32:20.340 "traddr": "10.0.0.1", 00:32:20.340 "adrfam": "ipv4", 00:32:20.340 "trsvcid": "4420", 00:32:20.340 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:20.340 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:20.340 "prchk_reftag": false, 00:32:20.340 "prchk_guard": false, 00:32:20.340 "hdgst": false, 00:32:20.340 "ddgst": false, 00:32:20.340 "dhchap_key": "key1", 00:32:20.340 "dhchap_ctrlr_key": "ckey2", 00:32:20.340 "method": "bdev_nvme_attach_controller", 00:32:20.340 "req_id": 1 00:32:20.340 } 00:32:20.340 Got JSON-RPC error response 00:32:20.340 response: 00:32:20.340 { 00:32:20.340 "code": -5, 00:32:20.340 "message": "Input/output error" 00:32:20.340 } 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:20.340 04:15:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:20.340 rmmod nvme_tcp 00:32:20.340 rmmod nvme_fabrics 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 964284 ']' 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 964284 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 964284 ']' 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 964284 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 964284 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 964284' 00:32:20.340 killing process with pid 964284 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 964284 00:32:20.340 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 964284 00:32:20.597 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:20.597 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:20.597 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:20.597 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:20.597 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:20.597 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.597 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.597 04:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.495 04:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:22.495 04:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:22.495 04:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:22.495 04:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:22.495 04:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:22.495 
04:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:22.495 04:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:22.495 04:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:22.495 04:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:22.495 04:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:22.495 04:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:22.495 04:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:22.752 04:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:23.686 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:23.686 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:23.944 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:23.944 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:23.944 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:23.944 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:23.944 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:23.944 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:23.944 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:23.944 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:23.944 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:23.944 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:23.944 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:23.944 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:23.944 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:23.944 0000:80:04.0 (8086 0e20): ioatdma -> 
vfio-pci 00:32:24.879 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:24.879 04:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.IqD /tmp/spdk.key-null.o1R /tmp/spdk.key-sha256.oQ3 /tmp/spdk.key-sha384.eav /tmp/spdk.key-sha512.bmU /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:24.879 04:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:26.253 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:26.253 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:26.253 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:26.253 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:26.253 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:26.253 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:26.253 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:26.253 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:26.253 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:26.253 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:26.253 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:26.253 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:26.253 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:26.253 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:26.253 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:26.253 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:26.253 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:26.253 00:32:26.253 real 0m49.694s 00:32:26.253 user 0m47.389s 00:32:26.253 sys 0m5.735s 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:26.253 ************************************ 00:32:26.253 END TEST nvmf_auth_host 00:32:26.253 ************************************ 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.253 ************************************ 00:32:26.253 START TEST nvmf_digest 00:32:26.253 ************************************ 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:26.253 * Looking for test storage... 
00:32:26.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.253 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.254 04:15:41 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:32:26.254 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:28.156 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:28.156 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:28.156 04:15:43 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:28.156 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:28.156 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:28.156 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:28.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:28.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:32:28.415 00:32:28.415 --- 10.0.0.2 ping statistics --- 00:32:28.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.415 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:28.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:28.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:32:28.415 00:32:28.415 --- 10.0.0.1 ping statistics --- 00:32:28.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.415 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:28.415 ************************************ 00:32:28.415 START TEST nvmf_digest_clean 00:32:28.415 ************************************ 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=974358 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 974358 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 974358 ']' 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:28.415 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:28.415 [2024-07-25 04:15:43.647357] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:32:28.415 [2024-07-25 04:15:43.647438] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.415 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.415 [2024-07-25 04:15:43.683777] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:32:28.673 [2024-07-25 04:15:43.715427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.673 [2024-07-25 04:15:43.808285] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:28.673 [2024-07-25 04:15:43.808355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:28.673 [2024-07-25 04:15:43.808372] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.673 [2024-07-25 04:15:43.808395] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.673 [2024-07-25 04:15:43.808408] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:28.673 [2024-07-25 04:15:43.808437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.673 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:28.673 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:28.673 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:28.673 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:28.673 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:28.673 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.673 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:28.673 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:28.673 
04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:28.673 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.673 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:28.931 null0 00:32:28.931 [2024-07-25 04:15:43.985379] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.931 [2024-07-25 04:15:44.009626] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=974386 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 974386 /var/tmp/bperf.sock 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread 
-o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 974386 ']' 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:28.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:28.931 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:28.931 [2024-07-25 04:15:44.056768] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:32:28.931 [2024-07-25 04:15:44.056850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974386 ] 00:32:28.931 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.931 [2024-07-25 04:15:44.090036] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:28.931 [2024-07-25 04:15:44.118364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.931 [2024-07-25 04:15:44.201039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.189 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:29.189 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:29.189 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:29.189 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:29.189 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:29.446 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:29.446 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:29.734 nvme0n1 00:32:29.734 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:29.734 04:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:29.991 Running I/O for 2 seconds... 
00:32:31.889 00:32:31.889 Latency(us) 00:32:31.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.889 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:31.889 nvme0n1 : 2.00 19370.61 75.67 0.00 0.00 6599.55 3640.89 19126.80 00:32:31.889 =================================================================================================================== 00:32:31.889 Total : 19370.61 75.67 0.00 0.00 6599.55 3640.89 19126.80 00:32:31.889 0 00:32:31.889 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:31.889 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:31.889 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:31.889 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:31.889 | select(.opcode=="crc32c") 00:32:31.889 | "\(.module_name) \(.executed)"' 00:32:31.889 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 974386 00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 974386 ']' 
00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 974386 00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 974386 00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 974386' 00:32:32.147 killing process with pid 974386 00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 974386 00:32:32.147 Received shutdown signal, test time was about 2.000000 seconds 00:32:32.147 00:32:32.147 Latency(us) 00:32:32.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.147 =================================================================================================================== 00:32:32.147 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:32.147 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 974386 00:32:32.405 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:32.405 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:32.405 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:32.405 
04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:32.405 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:32.405 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:32.405 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:32.405 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=974795 00:32:32.405 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 974795 /var/tmp/bperf.sock 00:32:32.405 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:32.405 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 974795 ']' 00:32:32.405 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:32.406 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:32.406 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:32.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:32.406 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:32.406 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:32.406 [2024-07-25 04:15:47.618101] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:32:32.406 [2024-07-25 04:15:47.618176] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974795 ] 00:32:32.406 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:32.406 Zero copy mechanism will not be used. 00:32:32.406 EAL: No free 2048 kB hugepages reported on node 1 00:32:32.406 [2024-07-25 04:15:47.648983] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:32.406 [2024-07-25 04:15:47.680302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.664 [2024-07-25 04:15:47.768742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.664 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:32.664 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:32.664 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:32.664 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:32.664 04:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:32.922 04:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:32.922 04:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:33.488 nvme0n1 00:32:33.488 04:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:33.488 04:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:33.488 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:33.488 Zero copy mechanism will not be used. 00:32:33.488 Running I/O for 2 seconds... 00:32:36.018 00:32:36.018 Latency(us) 00:32:36.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.018 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:36.018 nvme0n1 : 2.00 3443.27 430.41 0.00 0.00 4641.98 1413.88 6699.24 00:32:36.018 =================================================================================================================== 00:32:36.018 Total : 3443.27 430.41 0.00 0.00 4641.98 1413.88 6699.24 00:32:36.018 0 00:32:36.018 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:36.018 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:36.018 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:36.018 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:36.018 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:36.018 | select(.opcode=="crc32c") 00:32:36.018 | "\(.module_name) \(.executed)"' 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:36.018 
04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 974795 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 974795 ']' 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 974795 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 974795 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 974795' 00:32:36.018 killing process with pid 974795 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 974795 00:32:36.018 Received shutdown signal, test time was about 2.000000 seconds 00:32:36.018 00:32:36.018 Latency(us) 00:32:36.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.018 
=================================================================================================================== 00:32:36.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 974795 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=975200 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 975200 /var/tmp/bperf.sock 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 975200 ']' 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:36.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:36.018 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:36.018 [2024-07-25 04:15:51.273608] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:32:36.018 [2024-07-25 04:15:51.273685] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid975200 ] 00:32:36.018 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.018 [2024-07-25 04:15:51.303958] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:36.276 [2024-07-25 04:15:51.332772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.276 [2024-07-25 04:15:51.420904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.276 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:36.276 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:36.276 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:36.276 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:36.276 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:36.535 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:36.535 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:37.101 nvme0n1 00:32:37.101 04:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:37.101 04:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:37.358 Running I/O for 2 seconds... 
00:32:39.252 00:32:39.252 Latency(us) 00:32:39.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.252 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.252 nvme0n1 : 2.00 20190.67 78.87 0.00 0.00 6330.12 3349.62 16019.91 00:32:39.252 =================================================================================================================== 00:32:39.252 Total : 20190.67 78.87 0.00 0.00 6330.12 3349.62 16019.91 00:32:39.252 0 00:32:39.252 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:39.252 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:39.252 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:39.252 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:39.252 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:39.252 | select(.opcode=="crc32c") 00:32:39.252 | "\(.module_name) \(.executed)"' 00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 975200 00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 975200 ']' 
00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 975200 00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 975200 00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 975200' 00:32:39.510 killing process with pid 975200 00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 975200 00:32:39.510 Received shutdown signal, test time was about 2.000000 seconds 00:32:39.510 00:32:39.510 Latency(us) 00:32:39.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.510 =================================================================================================================== 00:32:39.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:39.510 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 975200 00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 
00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=975726 00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 975726 /var/tmp/bperf.sock 00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 975726 ']' 00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:39.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:39.768 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:39.768 [2024-07-25 04:15:54.971621] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:32:39.768 [2024-07-25 04:15:54.971699] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid975726 ] 00:32:39.768 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:39.768 Zero copy mechanism will not be used. 00:32:39.768 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.768 [2024-07-25 04:15:55.002732] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:39.768 [2024-07-25 04:15:55.030282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.025 [2024-07-25 04:15:55.115851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.025 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:40.025 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:40.025 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:40.025 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:40.025 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:40.283 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:40.283 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:40.847 nvme0n1 00:32:40.847 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:40.847 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:40.847 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:40.847 Zero copy mechanism will not be used. 00:32:40.847 Running I/O for 2 seconds... 
00:32:42.739 00:32:42.739 Latency(us) 00:32:42.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.739 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:42.739 nvme0n1 : 2.00 3253.55 406.69 0.00 0.00 4907.18 3737.98 13592.65 00:32:42.739 =================================================================================================================== 00:32:42.739 Total : 3253.55 406.69 0.00 0.00 4907.18 3737.98 13592.65 00:32:42.739 0 00:32:42.739 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:42.739 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:42.739 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:42.739 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:42.739 | select(.opcode=="crc32c") 00:32:42.739 | "\(.module_name) \(.executed)"' 00:32:42.739 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 975726 00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 975726 ']' 
00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 975726 00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 975726 00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 975726' 00:32:42.996 killing process with pid 975726 00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 975726 00:32:42.996 Received shutdown signal, test time was about 2.000000 seconds 00:32:42.996 00:32:42.996 Latency(us) 00:32:42.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.996 =================================================================================================================== 00:32:42.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:42.996 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 975726 00:32:43.254 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 974358 00:32:43.254 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 974358 ']' 00:32:43.254 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 974358 00:32:43.254 04:15:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:43.254 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:43.254 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 974358 00:32:43.254 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:43.254 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:43.254 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 974358' 00:32:43.254 killing process with pid 974358 00:32:43.254 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 974358 00:32:43.254 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 974358 00:32:43.512 00:32:43.512 real 0m15.194s 00:32:43.512 user 0m30.245s 00:32:43.512 sys 0m4.126s 00:32:43.512 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:43.512 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:43.512 ************************************ 00:32:43.512 END TEST nvmf_digest_clean 00:32:43.512 ************************************ 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:43.770 
************************************ 00:32:43.770 START TEST nvmf_digest_error 00:32:43.770 ************************************ 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=976161 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 976161 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 976161 ']' 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:43.770 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:43.770 [2024-07-25 04:15:58.887789] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:32:43.770 [2024-07-25 04:15:58.887881] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.770 EAL: No free 2048 kB hugepages reported on node 1 00:32:43.770 [2024-07-25 04:15:58.926363] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:43.770 [2024-07-25 04:15:58.954014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.770 [2024-07-25 04:15:59.041999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.770 [2024-07-25 04:15:59.042071] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.770 [2024-07-25 04:15:59.042084] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.770 [2024-07-25 04:15:59.042095] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.770 [2024-07-25 04:15:59.042104] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:43.770 [2024-07-25 04:15:59.042129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.027 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:44.027 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:44.027 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:44.027 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:44.028 [2024-07-25 04:15:59.122731] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:32:44.028 null0 00:32:44.028 [2024-07-25 04:15:59.234077] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:44.028 [2024-07-25 04:15:59.258330] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=976187 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 976187 /var/tmp/bperf.sock 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 976187 ']' 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bperf.sock...' 00:32:44.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:44.028 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:44.028 [2024-07-25 04:15:59.304435] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:32:44.028 [2024-07-25 04:15:59.304510] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid976187 ] 00:32:44.285 EAL: No free 2048 kB hugepages reported on node 1 00:32:44.285 [2024-07-25 04:15:59.337850] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:44.285 [2024-07-25 04:15:59.367142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.285 [2024-07-25 04:15:59.454968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.285 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:44.285 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:44.285 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:44.285 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:44.542 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:44.542 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.542 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:44.542 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.542 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:44.542 04:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:45.160 nvme0n1 00:32:45.160 04:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:45.160 04:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.160 04:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:45.160 04:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.160 04:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:45.160 04:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:45.160 Running I/O for 2 seconds... 00:32:45.160 [2024-07-25 04:16:00.256868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.160 [2024-07-25 04:16:00.256933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.160 [2024-07-25 04:16:00.256967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.160 [2024-07-25 04:16:00.272295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.160 [2024-07-25 04:16:00.272327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.160 [2024-07-25 04:16:00.272359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.160 [2024-07-25 04:16:00.289050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c9280) 00:32:45.160 [2024-07-25 04:16:00.289097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.160 [2024-07-25 04:16:00.289114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.160 [2024-07-25 04:16:00.305772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.160 [2024-07-25 04:16:00.305818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.160 [2024-07-25 04:16:00.305851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.160 [2024-07-25 04:16:00.319103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.161 [2024-07-25 04:16:00.319141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.161 [2024-07-25 04:16:00.319161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.161 [2024-07-25 04:16:00.333074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.161 [2024-07-25 04:16:00.333111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.161 [2024-07-25 04:16:00.333141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.161 [2024-07-25 04:16:00.346533] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.161 [2024-07-25 04:16:00.346571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.161 [2024-07-25 04:16:00.346590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.161 [2024-07-25 04:16:00.360435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.161 [2024-07-25 04:16:00.360477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.161 [2024-07-25 04:16:00.360505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.161 [2024-07-25 04:16:00.373147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.161 [2024-07-25 04:16:00.373183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.161 [2024-07-25 04:16:00.373203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.161 [2024-07-25 04:16:00.387738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.161 [2024-07-25 04:16:00.387780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.161 [2024-07-25 04:16:00.387801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:32:45.161 [2024-07-25 04:16:00.401452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.161 [2024-07-25 04:16:00.401485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.161 [2024-07-25 04:16:00.401502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.161 [2024-07-25 04:16:00.421640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.161 [2024-07-25 04:16:00.421679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.161 [2024-07-25 04:16:00.421699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.434056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.434094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.434113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.450270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.450332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.450350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.467713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.467775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.467806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.480233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.480280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.480300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.496882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.496919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.496938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.514813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.514849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 
04:16:00.514869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.527978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.528015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.528034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.543968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.544014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.544044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.561256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.561301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.561337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.576366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.576408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17445 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.576435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.589470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.589499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.589531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.606103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.606141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.606161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.622624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.622669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.622700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.635681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.635718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.635737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.652344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.652375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.652408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.665252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.665299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.665316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.681485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.681531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.419 [2024-07-25 04:16:00.681551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.419 [2024-07-25 04:16:00.695956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c9280) 00:32:45.419 [2024-07-25 04:16:00.696001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.420 [2024-07-25 04:16:00.696034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.420 [2024-07-25 04:16:00.709775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.420 [2024-07-25 04:16:00.709811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.420 [2024-07-25 04:16:00.709831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.679 [2024-07-25 04:16:00.723435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.679 [2024-07-25 04:16:00.723467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.679 [2024-07-25 04:16:00.723504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.679 [2024-07-25 04:16:00.740551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.679 [2024-07-25 04:16:00.740582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.679 [2024-07-25 04:16:00.740614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.679 [2024-07-25 04:16:00.755796] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.679 [2024-07-25 04:16:00.755842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.679 [2024-07-25 04:16:00.755873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.679 [2024-07-25 04:16:00.769750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.679 [2024-07-25 04:16:00.769786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.679 [2024-07-25 04:16:00.769805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.679 [2024-07-25 04:16:00.786841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.679 [2024-07-25 04:16:00.786877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.679 [2024-07-25 04:16:00.786897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.679 [2024-07-25 04:16:00.800217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.679 [2024-07-25 04:16:00.800286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.679 [2024-07-25 04:16:00.800317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:32:45.679 [2024-07-25 04:16:00.814456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.679 [2024-07-25 04:16:00.814496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.679 [2024-07-25 04:16:00.814524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.679 [2024-07-25 04:16:00.830748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.679 [2024-07-25 04:16:00.830793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.679 [2024-07-25 04:16:00.830824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.680 [2024-07-25 04:16:00.842717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.680 [2024-07-25 04:16:00.842753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.680 [2024-07-25 04:16:00.842773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.680 [2024-07-25 04:16:00.859853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.680 [2024-07-25 04:16:00.859898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.680 [2024-07-25 04:16:00.859931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.680 [2024-07-25 04:16:00.872176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.680 [2024-07-25 04:16:00.872212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.680 [2024-07-25 04:16:00.872231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.680 [2024-07-25 04:16:00.888310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.680 [2024-07-25 04:16:00.888342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.680 [2024-07-25 04:16:00.888360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.680 [2024-07-25 04:16:00.901174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.680 [2024-07-25 04:16:00.901211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.680 [2024-07-25 04:16:00.901231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.680 [2024-07-25 04:16:00.914797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.680 [2024-07-25 04:16:00.914842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.680 [2024-07-25 04:16:00.914873] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.680 [2024-07-25 04:16:00.930277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.680 [2024-07-25 04:16:00.930332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.680 [2024-07-25 04:16:00.930360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.680 [2024-07-25 04:16:00.943339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.680 [2024-07-25 04:16:00.943380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.680 [2024-07-25 04:16:00.943407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.680 [2024-07-25 04:16:00.956067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.680 [2024-07-25 04:16:00.956113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.680 [2024-07-25 04:16:00.956145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.680 [2024-07-25 04:16:00.969966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.680 [2024-07-25 04:16:00.970002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8723 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:45.680 [2024-07-25 04:16:00.970027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.938 [2024-07-25 04:16:00.983293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.938 [2024-07-25 04:16:00.983332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.938 [2024-07-25 04:16:00.983360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.938 [2024-07-25 04:16:00.998210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.938 [2024-07-25 04:16:00.998254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.938 [2024-07-25 04:16:00.998276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.938 [2024-07-25 04:16:01.011168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.938 [2024-07-25 04:16:01.011204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.938 [2024-07-25 04:16:01.011223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.938 [2024-07-25 04:16:01.027116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.938 [2024-07-25 04:16:01.027162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:126 nsid:1 lba:21294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.938 [2024-07-25 04:16:01.027193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.938 [2024-07-25 04:16:01.041540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.938 [2024-07-25 04:16:01.041577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.938 [2024-07-25 04:16:01.041596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.938 [2024-07-25 04:16:01.053356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.938 [2024-07-25 04:16:01.053389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.938 [2024-07-25 04:16:01.053407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.938 [2024-07-25 04:16:01.069923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.938 [2024-07-25 04:16:01.069954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.938 [2024-07-25 04:16:01.069970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.938 [2024-07-25 04:16:01.086340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.938 [2024-07-25 04:16:01.086382] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.938 [2024-07-25 04:16:01.086409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.939 [2024-07-25 04:16:01.099602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.939 [2024-07-25 04:16:01.099655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.939 [2024-07-25 04:16:01.099703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.939 [2024-07-25 04:16:01.118624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.939 [2024-07-25 04:16:01.118670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.939 [2024-07-25 04:16:01.118702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.939 [2024-07-25 04:16:01.130811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.939 [2024-07-25 04:16:01.130847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.939 [2024-07-25 04:16:01.130868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.939 [2024-07-25 04:16:01.147219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c9280) 00:32:45.939 [2024-07-25 04:16:01.147289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.939 [2024-07-25 04:16:01.147308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.939 [2024-07-25 04:16:01.164222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.939 [2024-07-25 04:16:01.164276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.939 [2024-07-25 04:16:01.164297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.939 [2024-07-25 04:16:01.181668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.939 [2024-07-25 04:16:01.181714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.939 [2024-07-25 04:16:01.181745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.939 [2024-07-25 04:16:01.193703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.939 [2024-07-25 04:16:01.193739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.939 [2024-07-25 04:16:01.193759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.939 [2024-07-25 04:16:01.211894] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.939 [2024-07-25 04:16:01.211930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.939 [2024-07-25 04:16:01.211951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.939 [2024-07-25 04:16:01.224086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:45.939 [2024-07-25 04:16:01.224122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.939 [2024-07-25 04:16:01.224142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.196 [2024-07-25 04:16:01.237165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:46.196 [2024-07-25 04:16:01.237202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.196 [2024-07-25 04:16:01.237222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.196 [2024-07-25 04:16:01.253393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:46.196 [2024-07-25 04:16:01.253441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.196 [2024-07-25 04:16:01.253459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:32:46.196 [2024-07-25 04:16:01.268316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:46.196 [2024-07-25 04:16:01.268357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.196 [2024-07-25 04:16:01.268385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.196 [2024-07-25 04:16:01.282240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:46.196 [2024-07-25 04:16:01.282297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.196 [2024-07-25 04:16:01.282313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.196 [2024-07-25 04:16:01.299062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:46.196 [2024-07-25 04:16:01.299098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.196 [2024-07-25 04:16:01.299118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.196 [2024-07-25 04:16:01.312288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:46.196 [2024-07-25 04:16:01.312320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.196 [2024-07-25 04:16:01.312336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.197 [2024-07-25 04:16:01.328004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:46.197 [2024-07-25 04:16:01.328041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.197 [2024-07-25 04:16:01.328061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.197 [2024-07-25 04:16:01.340461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:46.197 [2024-07-25 04:16:01.340491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.197 [2024-07-25 04:16:01.340523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.197 [2024-07-25 04:16:01.359115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:46.197 [2024-07-25 04:16:01.359151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.197 [2024-07-25 04:16:01.359176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.197 [2024-07-25 04:16:01.374203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280) 00:32:46.197 [2024-07-25 04:16:01.374256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.197 [2024-07-25 
04:16:01.374289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.197 [2024-07-25 04:16:01.387742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c9280)
00:32:46.197 [2024-07-25 04:16:01.387787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.197 [2024-07-25 04:16:01.387818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:47.229
00:32:47.229 Latency(us)
00:32:47.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:47.229 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:47.229 nvme0n1 : 2.05 17210.24 67.23 0.00 0.00 7281.26 3907.89 46020.84
00:32:47.229 ===================================================================================================================
00:32:47.229 Total : 17210.24 67.23 0.00 0.00 7281.26 3907.89 46020.84
00:32:47.229 0
00:32:47.229 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:47.229 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:47.229 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:47.229 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:47.229 | .driver_specific
00:32:47.229 | .nvme_error
00:32:47.229 | .status_code
00:32:47.229 | .command_transient_transport_error'
00:32:47.487 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 ))
00:32:47.487 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 976187
00:32:47.487 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 976187 ']'
00:32:47.487 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 976187
00:32:47.487 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:47.487 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:47.487 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 976187
00:32:47.487 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:47.487 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:47.487 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 976187'
00:32:47.487 killing process with pid 976187
00:32:47.487 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 976187
00:32:47.487 Received shutdown signal, test time was about 2.000000 seconds
00:32:47.487
00:32:47.487 Latency(us)
00:32:47.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:47.487 ===================================================================================================================
00:32:47.487 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:47.487 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 976187
00:32:47.744 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:32:47.744 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:47.744 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:32:47.744 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:47.745 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:47.745 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=976675
00:32:47.745 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 976675 /var/tmp/bperf.sock
00:32:47.745 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:32:47.745 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 976675 ']'
00:32:47.745 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:47.745 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:47.745 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:47.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:47.745 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:47.745 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:47.745 [2024-07-25 04:16:02.855102] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization...
00:32:47.745 [2024-07-25 04:16:02.855181] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid976675 ]
00:32:47.745 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:47.745 Zero copy mechanism will not be used.
00:32:47.745 EAL: No free 2048 kB hugepages reported on node 1
00:32:47.745 [2024-07-25 04:16:02.885704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:47.745 [2024-07-25 04:16:02.916668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:47.745 [2024-07-25 04:16:03.006461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:48.002 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:48.002 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:32:48.002 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:48.002 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:48.269 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:48.269 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.269 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:48.269 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.269 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:48.269 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:48.528 nvme0n1
00:32:48.528 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:48.528 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.528 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:48.528 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.528 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:48.528 04:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:48.786 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:48.786 Zero copy mechanism will not be used.
00:32:48.786 Running I/O for 2 seconds...
00:32:48.786 [2024-07-25 04:16:03.869191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.786 [2024-07-25 04:16:03.869270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.786 [2024-07-25 04:16:03.869292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.877598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.877632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.877650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.885678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.885709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.885727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.893933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.893964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.893981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.902578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.902608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.902625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.910767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.910797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.910814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.919185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.919215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.919257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.927481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.927527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.927543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.935829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.935859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.935876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.944064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.944093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.944126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.952393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.952437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.952455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.960609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.960638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.960656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.968984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.969013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.969045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.977265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.977309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.977333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.985692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.985735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.985753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:03.994075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:03.994102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:03.994134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:04.002579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:04.002607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:04.002638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:04.010811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:04.010854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:04.010872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:04.019178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:04.019221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:04.019237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:04.027526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:04.027569] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:04.027586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:04.035744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:04.035772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:04.035804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:04.044102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:04.044145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:04.044161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:04.052276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:04.052305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:04.052337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:04.060455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:04.060483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:04.060515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:04.068776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:04.068805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:04.068837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:48.787 [2024-07-25 04:16:04.077213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:48.787 [2024-07-25 04:16:04.077248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.787 [2024-07-25 04:16:04.077266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.045 [2024-07-25 04:16:04.085487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.045 [2024-07-25 04:16:04.085532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.045 [2024-07-25 04:16:04.085548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.045 [2024-07-25 04:16:04.093820] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.093848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.093881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.102193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.102221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.102260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.110427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.110471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.110487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.118619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.118647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.118689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.127119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.127147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.127164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.135469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.135498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.135529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.143700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.143743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.143759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.151963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.151992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.152008] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.160314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.160344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.160361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.168618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.168646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.168662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.176860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.176889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.176905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.185253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.185282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 
04:16:04.185299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.193743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.193797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.193814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.202067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.202112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.202129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.210237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.210274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.210290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.218477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.218507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.218539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.226825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.226869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.226885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.235103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.235132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.235149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.243667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.243710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.243726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.252182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.252210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.252248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.260545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.260572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.260587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.269166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.269193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.269209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.277593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.277621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.277636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.285991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.286018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.286034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.294445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.294473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.294488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.302856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.302898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.302915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.311088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.311130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.311145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.319304] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.319332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.319347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.327695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.327721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.327737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.046 [2024-07-25 04:16:04.336090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.046 [2024-07-25 04:16:04.336118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.046 [2024-07-25 04:16:04.336154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.304 [2024-07-25 04:16:04.344782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.304 [2024-07-25 04:16:04.344811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.304 [2024-07-25 04:16:04.344828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:32:49.304 [2024-07-25 04:16:04.353325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.304 [2024-07-25 04:16:04.353368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.304 [2024-07-25 04:16:04.353384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.304 [2024-07-25 04:16:04.361698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.304 [2024-07-25 04:16:04.361725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.304 [2024-07-25 04:16:04.361740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.304 [2024-07-25 04:16:04.370035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.304 [2024-07-25 04:16:04.370078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.304 [2024-07-25 04:16:04.370095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.304 [2024-07-25 04:16:04.378476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.378505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.378520] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.386838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.386881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.386897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.395194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.395221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.395237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.403628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.403670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.403686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.412136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.412178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 
[2024-07-25 04:16:04.412194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.420663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.420706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.420722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.429135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.429161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.429177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.437576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.437603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.437619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.446126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.446153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.446169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.454532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.454559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.454574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.462866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.462893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.462909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.471281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.471322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.471337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.479917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.479945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.479966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.488744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.488789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.488806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.497444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.497472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.497489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.506086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.506128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.506144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.514641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.514668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.514684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.523268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.523295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.523311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.531651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.531678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.531695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.540465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.540495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.540511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.549076] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.549106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.549123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.557421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.557469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.557487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.565847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.565890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.565907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.574166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.574193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.574209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.582801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.582828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.582844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.591207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.591234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.305 [2024-07-25 04:16:04.591257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.305 [2024-07-25 04:16:04.599511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.305 [2024-07-25 04:16:04.599554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.306 [2024-07-25 04:16:04.599571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.563 [2024-07-25 04:16:04.607992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.563 [2024-07-25 04:16:04.608019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.563 [2024-07-25 04:16:04.608035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.563 [2024-07-25 04:16:04.616299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.563 [2024-07-25 04:16:04.616326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.563 [2024-07-25 04:16:04.616342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.563 [2024-07-25 04:16:04.624557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.563 [2024-07-25 04:16:04.624588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.563 [2024-07-25 04:16:04.624619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.563 [2024-07-25 04:16:04.632685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.563 [2024-07-25 04:16:04.632713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.563 [2024-07-25 04:16:04.632729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.563 [2024-07-25 04:16:04.640977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.563 [2024-07-25 04:16:04.641006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.563 [2024-07-25 
04:16:04.641022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.563 [2024-07-25 04:16:04.649164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.563 [2024-07-25 04:16:04.649191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.563 [2024-07-25 04:16:04.649208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.563 [2024-07-25 04:16:04.657424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.563 [2024-07-25 04:16:04.657465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.563 [2024-07-25 04:16:04.657481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.563 [2024-07-25 04:16:04.665808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.665835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.665851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.674564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.674606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.674622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.683156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.683184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.683200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.691997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.692025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.692041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.700464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.700494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.700520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.708836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.708879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.708896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.717500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.717528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.717544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.727673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.727718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.727737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.737504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.737532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.737548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.747875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.747904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.747921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.758339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.758367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.758383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.768687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.768716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.768732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.779159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.779203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.779219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.789515] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.789560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.789577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.800018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.800047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.800063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.809675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.809704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.809720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.819378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.819409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.819426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.828651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.828681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.828698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.838233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.838269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.838286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.847034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.847062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.847078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.564 [2024-07-25 04:16:04.855984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.564 [2024-07-25 04:16:04.856012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.564 [2024-07-25 04:16:04.856028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.866081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.866110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:04.866132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.875593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.875636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:04.875653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.885405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.885435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:04.885452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.894502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.894547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 
04:16:04.894563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.903113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.903142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:04.903158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.912701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.912731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:04.912748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.922502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.922532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:04.922548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.931734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.931763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:04.931779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.941163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.941192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:04.941208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.950511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.950544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:04.950561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.959432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.959461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:04.959477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.968707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.968736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:04.968752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.978132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.978161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:04.978177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.987113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.987159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:04.987176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:04.996378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:04.996409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:04.996426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:05.005555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:05.005586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:05.005602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:05.015034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:05.015080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:05.015096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.822 [2024-07-25 04:16:05.024103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.822 [2024-07-25 04:16:05.024148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.822 [2024-07-25 04:16:05.024164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.823 [2024-07-25 04:16:05.032996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.823 [2024-07-25 04:16:05.033026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.823 [2024-07-25 04:16:05.033042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.823 [2024-07-25 04:16:05.041958] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.823 [2024-07-25 04:16:05.042003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.823 [2024-07-25 04:16:05.042020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.823 [2024-07-25 04:16:05.051134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.823 [2024-07-25 04:16:05.051164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.823 [2024-07-25 04:16:05.051180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.823 [2024-07-25 04:16:05.059760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.823 [2024-07-25 04:16:05.059790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.823 [2024-07-25 04:16:05.059806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.823 [2024-07-25 04:16:05.069142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.823 [2024-07-25 04:16:05.069171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.823 [2024-07-25 04:16:05.069188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:32:49.823 [2024-07-25 04:16:05.078477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.823 [2024-07-25 04:16:05.078506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.823 [2024-07-25 04:16:05.078522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.823 [2024-07-25 04:16:05.087554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.823 [2024-07-25 04:16:05.087583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.823 [2024-07-25 04:16:05.087599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.823 [2024-07-25 04:16:05.096097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.823 [2024-07-25 04:16:05.096127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.823 [2024-07-25 04:16:05.096144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.823 [2024-07-25 04:16:05.105068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.823 [2024-07-25 04:16:05.105099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.823 [2024-07-25 04:16:05.105121] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.823 [2024-07-25 04:16:05.114476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:49.823 [2024-07-25 04:16:05.114506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.823 [2024-07-25 04:16:05.114523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.123804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.123836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.123852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.133360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.133390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.133407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.141632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.141662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 
04:16:05.141678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.151221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.151275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.151293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.160230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.160286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.160309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.169424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.169453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.169469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.178401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.178430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.178446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.187194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.187239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.187265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.194983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.195013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.195030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.204379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.204410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.204428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.213339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.213385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.213403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.222258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.222288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.222305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.231171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.231200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.231217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.239553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.239585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.239602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.247980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.248012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.248028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.256866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.256913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.256935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.265598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.265646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.265663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.274515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.274554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.274584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.283413] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.283441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.283460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.292565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.292593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.292609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.301558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.301587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.301602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.081 [2024-07-25 04:16:05.310323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.081 [2024-07-25 04:16:05.310353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.081 [2024-07-25 04:16:05.310383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:32:50.082 [2024-07-25 04:16:05.319195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.082 [2024-07-25 04:16:05.319225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.082 [2024-07-25 04:16:05.319247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.082 [2024-07-25 04:16:05.328228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.082 [2024-07-25 04:16:05.328281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.082 [2024-07-25 04:16:05.328299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.082 [2024-07-25 04:16:05.336868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.082 [2024-07-25 04:16:05.336917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.082 [2024-07-25 04:16:05.336933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.082 [2024-07-25 04:16:05.346049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.082 [2024-07-25 04:16:05.346084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.082 [2024-07-25 04:16:05.346104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.082 [2024-07-25 04:16:05.355133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.082 [2024-07-25 04:16:05.355177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.082 [2024-07-25 04:16:05.355193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.082 [2024-07-25 04:16:05.363849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.082 [2024-07-25 04:16:05.363880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.082 [2024-07-25 04:16:05.363897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.082 [2024-07-25 04:16:05.372758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.082 [2024-07-25 04:16:05.372793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.082 [2024-07-25 04:16:05.372812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.382320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.382351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 
04:16:05.382384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.391168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.391197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.391227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.400488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.400535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.400552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.409588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.409632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.409649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.418668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.418699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.418729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.427612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.427659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.427676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.436643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.436677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.436696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.445377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.445407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.445438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.454596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.454627] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.454660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.463445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.463476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.463508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.472543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.472573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.472604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.481531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.481575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.481592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.490317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 
04:16:05.490363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.490385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.499750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.499782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.499799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.508491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.508537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.508554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.517858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.517888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.517918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.526962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.526991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.527022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.536343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.536386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.536403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.545342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.545386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.545402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.554495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.554525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.554556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.563396] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.563426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.563443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.572518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.572548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.572565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.581875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.581921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.581939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.591101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.591146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.591162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.600165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.600199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.600219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.609651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.609686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.609705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.619423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.619453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.619470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.340 [2024-07-25 04:16:05.628630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.340 [2024-07-25 04:16:05.628666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.340 [2024-07-25 04:16:05.628685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.637316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.637347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.637365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.646249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.646280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.646302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.655538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.655570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.655588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.666056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.666086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 
04:16:05.666103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.676677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.676721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.676738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.687496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.687529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.687546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.698538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.698573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.698602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.709402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.709433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.709452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.720588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.720619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.720637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.731299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.731331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.731350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.741084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.741140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.741160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.750639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.750672] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.750698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.760295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.760327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.760350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.770009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.770039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.770059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.780127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.780158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.780190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.790102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 
04:16:05.790133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.790151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.800018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.800053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.800072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.809040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.809071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.809090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.818734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.818769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.818789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.827522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.827564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.827582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.832986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.833016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.833034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.842987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.843036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.843056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.852284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390) 00:32:50.598 [2024-07-25 04:16:05.852330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.598 [2024-07-25 04:16:05.852348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:50.598 [2024-07-25 04:16:05.860948] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1258390)
00:32:50.598 [2024-07-25 04:16:05.860977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:50.598 [2024-07-25 04:16:05.860994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:50.598
00:32:50.598 Latency(us)
00:32:50.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:50.598 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:50.598 nvme0n1 : 2.00 3485.06 435.63 0.00 0.00 4584.97 1280.38 11262.48
00:32:50.598 ===================================================================================================================
00:32:50.598 Total : 3485.06 435.63 0.00 0.00 4584.97 1280.38 11262.48
00:32:50.598 0
00:32:50.598 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:50.598 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:50.598 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:50.598 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:50.598 | .driver_specific
00:32:50.598 | .nvme_error
00:32:50.598 | .status_code
00:32:50.598 | .command_transient_transport_error'
00:32:50.855 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 225 > 0 ))
00:32:50.855 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 976675
00:32:50.855 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 976675 ']'
00:32:50.855 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 976675
00:32:50.855 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:50.855 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:50.855 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 976675
00:32:50.855 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:50.855 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:50.855 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 976675'
killing process with pid 976675
00:32:50.855 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 976675
Received shutdown signal, test time was about 2.000000 seconds
00:32:50.855
00:32:50.855 Latency(us)
00:32:50.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:50.855 ===================================================================================================================
00:32:50.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:50.855 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 976675
00:32:51.112 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:32:51.112 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:51.112 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- #
rw=randwrite
00:32:51.112 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:32:51.112 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:32:51.112 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=977115
00:32:51.112 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:51.112 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 977115 /var/tmp/bperf.sock
00:32:51.112 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 977115 ']'
00:32:51.112 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:51.112 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:51.112 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:51.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:51.113 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:51.113 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:51.113 [2024-07-25 04:16:06.396395] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization...
00:32:51.113 [2024-07-25 04:16:06.396486] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid977115 ]
00:32:51.370 EAL: No free 2048 kB hugepages reported on node 1
00:32:51.370 [2024-07-25 04:16:06.431118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:51.370 [2024-07-25 04:16:06.462213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:51.370 [2024-07-25 04:16:06.554124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:51.370 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:51.370 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:32:51.370 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:51.370 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:51.627 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:51.627 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:51.627 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:51.627 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:51.627 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:51.627 04:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:52.191 nvme0n1
00:32:52.191 04:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:52.191 04:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:52.191 04:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:52.191 04:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:52.191 04:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:52.191 04:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:52.191 Running I/O for 2 seconds...
00:32:52.191 [2024-07-25 04:16:07.480289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e5ec8 00:32:52.191 [2024-07-25 04:16:07.481728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.191 [2024-07-25 04:16:07.481769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:52.449 [2024-07-25 04:16:07.492027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190fdeb0 00:32:52.449 [2024-07-25 04:16:07.493214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.449 [2024-07-25 04:16:07.493266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:52.449 [2024-07-25 04:16:07.505322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e4578 00:32:52.449 [2024-07-25 04:16:07.506430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.449 [2024-07-25 04:16:07.506460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:52.449 [2024-07-25 04:16:07.518656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190fef90 00:32:52.449 [2024-07-25 04:16:07.519792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.449 [2024-07-25 04:16:07.519834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:52.449 [2024-07-25 04:16:07.532046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e38d0 00:32:52.449 [2024-07-25 04:16:07.533157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.449 [2024-07-25 04:16:07.533191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:52.449 [2024-07-25 04:16:07.545417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190fd208 00:32:52.449 [2024-07-25 04:16:07.546470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.449 [2024-07-25 04:16:07.546504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:52.449 [2024-07-25 04:16:07.559635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190df988 00:32:52.449 [2024-07-25 04:16:07.560790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.449 [2024-07-25 04:16:07.560824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:52.449 [2024-07-25 04:16:07.572222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190de038 00:32:52.449 [2024-07-25 04:16:07.573369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.449 [2024-07-25 04:16:07.573400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:52.449 [2024-07-25 04:16:07.584829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e8088 00:32:52.449 [2024-07-25 04:16:07.585930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.449 [2024-07-25 04:16:07.585963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:52.449 [2024-07-25 04:16:07.597458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190eaab8 00:32:52.449 [2024-07-25 04:16:07.598569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.449 [2024-07-25 04:16:07.598597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:52.449 [2024-07-25 04:16:07.610000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190ebb98 00:32:52.449 [2024-07-25 04:16:07.611088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.450 [2024-07-25 04:16:07.611122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:52.450 [2024-07-25 04:16:07.622612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190f92c0 00:32:52.450 [2024-07-25 04:16:07.623682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:52.450 [2024-07-25 04:16:07.623715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.450 [2024-07-25 04:16:07.637845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190fac10 00:32:52.450 [2024-07-25 04:16:07.639487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.450 [2024-07-25 04:16:07.639517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.450 [2024-07-25 04:16:07.650411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190fb480 00:32:52.450 [2024-07-25 04:16:07.651990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.450 [2024-07-25 04:16:07.652023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:52.450 [2024-07-25 04:16:07.662895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190ff3c8 00:32:52.450 [2024-07-25 04:16:07.664495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.450 [2024-07-25 04:16:07.664529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:52.450 [2024-07-25 04:16:07.675513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190feb58 00:32:52.450 [2024-07-25 04:16:07.677118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19891 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.450 [2024-07-25 04:16:07.677151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:52.450 [2024-07-25 04:16:07.688098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190fd208 00:32:52.450 [2024-07-25 04:16:07.689657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.450 [2024-07-25 04:16:07.689691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:52.450 [2024-07-25 04:16:07.700742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190df118 00:32:52.450 [2024-07-25 04:16:07.702287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.450 [2024-07-25 04:16:07.702332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:52.450 [2024-07-25 04:16:07.713229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190de8a8 00:32:52.450 [2024-07-25 04:16:07.714755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.450 [2024-07-25 04:16:07.714789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:52.450 [2024-07-25 04:16:07.725841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e6300 00:32:52.450 [2024-07-25 04:16:07.727362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:22 nsid:1 lba:15206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.450 [2024-07-25 04:16:07.727396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:52.450 [2024-07-25 04:16:07.738419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e6fa8 00:32:52.450 [2024-07-25 04:16:07.739916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.450 [2024-07-25 04:16:07.739950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.751411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.754774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.754807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.765327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.765659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.765688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.779288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.779612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.779641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.793409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.793730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.793759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.807386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.807805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.807837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.821366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.821602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.821630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.835309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 
00:32:52.708 [2024-07-25 04:16:07.835532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.835561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.849253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.849575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.849618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.863197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.863456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.863580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.877230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.877565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.877593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.891223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.891554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.891582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.905161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.905506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.905536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.919478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.919821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.919850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.933588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.933927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.933956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.947558] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.947884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.947913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.961632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.961861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.961889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.975674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.976005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.976034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.708 [2024-07-25 04:16:07.989745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:07.990083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:07.990112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:32:52.708 [2024-07-25 04:16:08.003907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.708 [2024-07-25 04:16:08.004250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.708 [2024-07-25 04:16:08.004280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.965 [2024-07-25 04:16:08.018110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.965 [2024-07-25 04:16:08.018455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.965 [2024-07-25 04:16:08.018485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.965 [2024-07-25 04:16:08.032173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.965 [2024-07-25 04:16:08.032500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.965 [2024-07-25 04:16:08.032545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.965 [2024-07-25 04:16:08.046219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.965 [2024-07-25 04:16:08.046553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.965 [2024-07-25 04:16:08.046582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.965 [2024-07-25 04:16:08.060201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.965 [2024-07-25 04:16:08.060468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.965 [2024-07-25 04:16:08.060569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.965 [2024-07-25 04:16:08.074188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.965 [2024-07-25 04:16:08.074538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.965 [2024-07-25 04:16:08.074568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.965 [2024-07-25 04:16:08.088224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.965 [2024-07-25 04:16:08.088582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.965 [2024-07-25 04:16:08.088624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.965 [2024-07-25 04:16:08.102289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.965 [2024-07-25 04:16:08.102589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.965 [2024-07-25 04:16:08.102635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.965 [2024-07-25 04:16:08.116185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.965 [2024-07-25 04:16:08.116489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.965 [2024-07-25 04:16:08.116519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.965 [2024-07-25 04:16:08.130469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.965 [2024-07-25 04:16:08.130750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.965 [2024-07-25 04:16:08.130778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.965 [2024-07-25 04:16:08.144309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.966 [2024-07-25 04:16:08.144523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.966 [2024-07-25 04:16:08.144641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.966 [2024-07-25 04:16:08.158258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.966 [2024-07-25 04:16:08.158575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:52.966 [2024-07-25 04:16:08.158604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.966 [2024-07-25 04:16:08.172259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.966 [2024-07-25 04:16:08.172578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.966 [2024-07-25 04:16:08.172606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.966 [2024-07-25 04:16:08.186151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.966 [2024-07-25 04:16:08.186485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.966 [2024-07-25 04:16:08.186515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.966 [2024-07-25 04:16:08.200162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.966 [2024-07-25 04:16:08.200480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.966 [2024-07-25 04:16:08.200509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.966 [2024-07-25 04:16:08.214041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.966 [2024-07-25 04:16:08.214363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:24847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.966 [2024-07-25 04:16:08.214392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.966 [2024-07-25 04:16:08.228036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.966 [2024-07-25 04:16:08.228345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.966 [2024-07-25 04:16:08.228379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.966 [2024-07-25 04:16:08.242074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.966 [2024-07-25 04:16:08.242391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.966 [2024-07-25 04:16:08.242420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:52.966 [2024-07-25 04:16:08.256110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:52.966 [2024-07-25 04:16:08.256434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.966 [2024-07-25 04:16:08.256463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.223 [2024-07-25 04:16:08.270402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.223 [2024-07-25 04:16:08.270678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.223 [2024-07-25 04:16:08.270720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.223 [2024-07-25 04:16:08.284340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.223 [2024-07-25 04:16:08.284647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.223 [2024-07-25 04:16:08.284676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.223 [2024-07-25 04:16:08.298236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.223 [2024-07-25 04:16:08.298575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.223 [2024-07-25 04:16:08.298604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.223 [2024-07-25 04:16:08.312235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.223 [2024-07-25 04:16:08.312581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.223 [2024-07-25 04:16:08.312628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.223 [2024-07-25 04:16:08.326253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 
00:32:53.223 [2024-07-25 04:16:08.326596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.223 [2024-07-25 04:16:08.326625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.223 [2024-07-25 04:16:08.340228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.223 [2024-07-25 04:16:08.340543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.223 [2024-07-25 04:16:08.340572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.223 [2024-07-25 04:16:08.354180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.223 [2024-07-25 04:16:08.354524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.223 [2024-07-25 04:16:08.354568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.223 [2024-07-25 04:16:08.368221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.224 [2024-07-25 04:16:08.368560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.224 [2024-07-25 04:16:08.368600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.224 [2024-07-25 04:16:08.382180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.224 [2024-07-25 04:16:08.382473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.224 [2024-07-25 04:16:08.382502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.224 [2024-07-25 04:16:08.396171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.224 [2024-07-25 04:16:08.396481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.224 [2024-07-25 04:16:08.396524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.224 [2024-07-25 04:16:08.410179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.224 [2024-07-25 04:16:08.410499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.224 [2024-07-25 04:16:08.410529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.224 [2024-07-25 04:16:08.424330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.224 [2024-07-25 04:16:08.424632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.224 [2024-07-25 04:16:08.424661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.224 [2024-07-25 04:16:08.438301] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.224 [2024-07-25 04:16:08.438579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.224 [2024-07-25 04:16:08.438622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.224 [2024-07-25 04:16:08.452328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.224 [2024-07-25 04:16:08.452595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.224 [2024-07-25 04:16:08.452624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.224 [2024-07-25 04:16:08.466293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.224 [2024-07-25 04:16:08.466614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.224 [2024-07-25 04:16:08.466655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.224 [2024-07-25 04:16:08.480305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.224 [2024-07-25 04:16:08.480610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.224 [2024-07-25 04:16:08.480638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:32:53.224 [2024-07-25 04:16:08.494262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.224 [2024-07-25 04:16:08.494598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.224 [2024-07-25 04:16:08.494626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.224 [2024-07-25 04:16:08.508181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.224 [2024-07-25 04:16:08.508495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.224 [2024-07-25 04:16:08.508525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.522357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.522671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.482 [2024-07-25 04:16:08.522701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.536364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.536667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.482 [2024-07-25 04:16:08.536696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.550342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.550635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.482 [2024-07-25 04:16:08.550664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.564188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.564481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.482 [2024-07-25 04:16:08.564511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.578238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.578582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.482 [2024-07-25 04:16:08.578642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.592184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.592520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.482 [2024-07-25 04:16:08.592555] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.606257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.606586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.482 [2024-07-25 04:16:08.606614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.620306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.620611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.482 [2024-07-25 04:16:08.620639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.634324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.634662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.482 [2024-07-25 04:16:08.634712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.648303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.648593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.482 [2024-07-25 04:16:08.648622] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.662218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.662548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.482 [2024-07-25 04:16:08.662577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.676229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.676544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.482 [2024-07-25 04:16:08.676588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.690219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.690552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.482 [2024-07-25 04:16:08.690581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.704190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.704519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22276 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:53.482 [2024-07-25 04:16:08.704574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.482 [2024-07-25 04:16:08.718237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.482 [2024-07-25 04:16:08.718589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.483 [2024-07-25 04:16:08.718618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.483 [2024-07-25 04:16:08.732217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.483 [2024-07-25 04:16:08.732554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.483 [2024-07-25 04:16:08.732598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.483 [2024-07-25 04:16:08.746195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.483 [2024-07-25 04:16:08.746516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.483 [2024-07-25 04:16:08.746545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.483 [2024-07-25 04:16:08.760229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.483 [2024-07-25 04:16:08.760560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 
nsid:1 lba:14324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.483 [2024-07-25 04:16:08.760589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.483 [2024-07-25 04:16:08.774239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.483 [2024-07-25 04:16:08.774504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.483 [2024-07-25 04:16:08.774531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.788715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.789040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.789069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.802621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.802867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.802894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.816585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.816907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.816935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.830566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.830795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.830822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.844427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.844732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.844761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.858278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.858519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.858643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.872356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 
00:32:53.741 [2024-07-25 04:16:08.872640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.872669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.886201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.886566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.886595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.900230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.900562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.900591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.914157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.914457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.914486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.928215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.928550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.928579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.942272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.942618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.942660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.956255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.956570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.956622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.970161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.970463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.970492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.984132] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.984469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.984519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:08.998198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:08.998512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:08.998540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:09.012174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:09.012407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:09.012514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:53.741 [2024-07-25 04:16:09.026208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:53.741 [2024-07-25 04:16:09.026539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.741 [2024-07-25 04:16:09.026567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:32:53.999 [2024-07-25 04:16:09.040498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.040811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.040840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.054601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.054848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.054876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.068532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.068889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.068918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.082564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.082885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.082914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.096612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.096924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.096952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.110614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.110931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.110960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.124706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.125024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.125053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.138723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.138974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.139002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.152798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.153126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.153154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.166648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.166960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.166989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.180459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.180750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.180779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.194411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.194694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:54.000 [2024-07-25 04:16:09.194723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.208185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.208514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.208558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.222164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.222507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.222536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.236233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.236553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.236603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.250133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.250493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:19690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.250531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.264238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.264604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.264632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.278106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.278406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.278454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.000 [2024-07-25 04:16:09.291889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.000 [2024-07-25 04:16:09.292222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.000 [2024-07-25 04:16:09.292257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.259 [2024-07-25 04:16:09.306191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.259 [2024-07-25 04:16:09.306502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.259 [2024-07-25 04:16:09.306531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.259 [2024-07-25 04:16:09.320140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.259 [2024-07-25 04:16:09.320484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.259 [2024-07-25 04:16:09.320518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.259 [2024-07-25 04:16:09.334053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.259 [2024-07-25 04:16:09.334313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.259 [2024-07-25 04:16:09.334439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.259 [2024-07-25 04:16:09.348087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.259 [2024-07-25 04:16:09.348400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.259 [2024-07-25 04:16:09.348429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.259 [2024-07-25 04:16:09.361880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 
00:32:54.259 [2024-07-25 04:16:09.362128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.259 [2024-07-25 04:16:09.362259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.259 [2024-07-25 04:16:09.376197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.259 [2024-07-25 04:16:09.376497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.259 [2024-07-25 04:16:09.376615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.259 [2024-07-25 04:16:09.390356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.259 [2024-07-25 04:16:09.390659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.259 [2024-07-25 04:16:09.390688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.259 [2024-07-25 04:16:09.404305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.259 [2024-07-25 04:16:09.404630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.259 [2024-07-25 04:16:09.404659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.259 [2024-07-25 04:16:09.418195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.259 [2024-07-25 04:16:09.418531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.259 [2024-07-25 04:16:09.418578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.259 [2024-07-25 04:16:09.432107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.259 [2024-07-25 04:16:09.432334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.259 [2024-07-25 04:16:09.432378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.259 [2024-07-25 04:16:09.446170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.259 [2024-07-25 04:16:09.446523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.259 [2024-07-25 04:16:09.446553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.259 [2024-07-25 04:16:09.460408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33940) with pdu=0x2000190e23b8 00:32:54.259 [2024-07-25 04:16:09.460651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.259 [2024-07-25 04:16:09.460680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:54.259 00:32:54.259 Latency(us) 
00:32:54.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.259 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:54.259 nvme0n1 : 2.01 18379.16 71.79 0.00 0.00 6947.33 3495.25 18447.17 00:32:54.259 =================================================================================================================== 00:32:54.259 Total : 18379.16 71.79 0.00 0.00 6947.33 3495.25 18447.17 00:32:54.259 0 00:32:54.259 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:54.259 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:54.259 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:54.259 | .driver_specific 00:32:54.259 | .nvme_error 00:32:54.259 | .status_code 00:32:54.259 | .command_transient_transport_error' 00:32:54.259 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:54.517 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:32:54.517 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 977115 00:32:54.517 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 977115 ']' 00:32:54.517 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 977115 00:32:54.517 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:32:54.517 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:54.517 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 977115 00:32:54.517 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:54.517 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:54.517 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 977115' 00:32:54.517 killing process with pid 977115 00:32:54.517 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 977115 00:32:54.517 Received shutdown signal, test time was about 2.000000 seconds 00:32:54.517 00:32:54.517 Latency(us) 00:32:54.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.517 =================================================================================================================== 00:32:54.517 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:54.517 04:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 977115 00:32:54.775 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:32:54.775 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:54.775 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:32:54.775 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:32:54.775 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:32:54.775 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:32:54.775 04:16:10 
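The trace above shows `get_transient_errcount` reading the per-bdev error counters via `rpc.py bdev_get_iostat -b nvme0n1` and selecting `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error` with jq, then asserting `(( 144 > 0 ))`. A minimal Python sketch of that same extraction, run against a made-up sample payload (the JSON shape just mirrors the jq path from the trace; the value 144 echoes the check above and is illustrative, not a real query):

```python
import json

# Sample payload shaped like the jq path used in the trace:
#   .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
# The structure and the value 144 are taken from this log, not from a live RPC call.
sample_iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 144
          }
        }
      }
    }
  ]
}
""")

def transient_errcount(iostat: dict) -> int:
    # Walk the same key path the jq filter selects.
    return (iostat["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])

count = transient_errcount(sample_iostat)
print(count)
```

The shell test treats any positive count as proof that the injected CRC32C corruption surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions, which is what the `(( ... > 0 ))` guard before `killprocess` verifies.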
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=977525 00:32:54.775 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 977525 /var/tmp/bperf.sock 00:32:54.775 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 977525 ']' 00:32:54.775 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:54.775 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:54.775 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:54.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:54.775 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:54.775 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:54.775 [2024-07-25 04:16:10.057554] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:32:54.775 [2024-07-25 04:16:10.057668] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid977525 ] 00:32:54.775 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:54.775 Zero copy mechanism will not be used. 00:32:55.032 EAL: No free 2048 kB hugepages reported on node 1 00:32:55.033 [2024-07-25 04:16:10.093400] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:55.033 [2024-07-25 04:16:10.120699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.033 [2024-07-25 04:16:10.208468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.033 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:55.033 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:55.033 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:55.033 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:55.291 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:55.291 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.291 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:55.291 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.291 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:55.291 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:55.858 nvme0n1 00:32:55.858 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:55.858 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.858 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:55.858 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.858 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:55.858 04:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:55.858 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:55.858 Zero copy mechanism will not be used. 00:32:55.858 Running I/O for 2 seconds... 00:32:55.858 [2024-07-25 04:16:11.116327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:55.858 [2024-07-25 04:16:11.116696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.858 [2024-07-25 04:16:11.116735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:55.858 [2024-07-25 04:16:11.125980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:55.858 [2024-07-25 04:16:11.126356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.858 [2024-07-25 04:16:11.126387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:55.858 
[2024-07-25 04:16:11.137410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:55.858 [2024-07-25 04:16:11.137802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.858 [2024-07-25 04:16:11.137835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:55.858 [2024-07-25 04:16:11.148626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:55.858 [2024-07-25 04:16:11.148998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.858 [2024-07-25 04:16:11.149031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.116 [2024-07-25 04:16:11.160249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.116 [2024-07-25 04:16:11.160629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-07-25 04:16:11.160661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.116 [2024-07-25 04:16:11.171186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.116 [2024-07-25 04:16:11.171573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-07-25 04:16:11.171605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.182694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.183076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.183108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.193122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.193491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.193521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.204726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.205098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.205130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.215474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.215853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.215886] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.227000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.227368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.227397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.237364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.237726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.237768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.248876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.249239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.249273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.259338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.259667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 
04:16:11.259696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.270107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.270468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.270497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.281164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.281512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.281558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.292154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.292514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.292543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.303058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.303453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.303497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.313202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.313571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.313599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.323907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.324283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.324327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.334843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.335187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.335214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.345483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.345822] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.345864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.356577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.356926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.356954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.367539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.367887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.367937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.378366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.378710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.378755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.388808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 
04:16:11.389151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.389178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.399865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.400206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.400256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.117 [2024-07-25 04:16:11.411347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.117 [2024-07-25 04:16:11.411509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.117 [2024-07-25 04:16:11.411537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.422194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.422596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.422639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.433790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.434155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.434182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.445337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.445673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.445701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.455835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.456004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.456032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.466381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.466707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.466735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.476437] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.476544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.476572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.486782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.487161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.487189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.497365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.497708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.497753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.506781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.507114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.507143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:32:56.376 [2024-07-25 04:16:11.517750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.518076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.518103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.528694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.529033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.529060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.539362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.539721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.539748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.549775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.550135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.550163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.561339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.561660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.561689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.573347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.573710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.573754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.583891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.584299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.376 [2024-07-25 04:16:11.584327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.376 [2024-07-25 04:16:11.594109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.376 [2024-07-25 04:16:11.594450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.377 [2024-07-25 04:16:11.594478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.377 [2024-07-25 04:16:11.605101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.377 [2024-07-25 04:16:11.605455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.377 [2024-07-25 04:16:11.605500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.377 [2024-07-25 04:16:11.615897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.377 [2024-07-25 04:16:11.616282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.377 [2024-07-25 04:16:11.616310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.377 [2024-07-25 04:16:11.626430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.377 [2024-07-25 04:16:11.626806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.377 [2024-07-25 04:16:11.626834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.377 [2024-07-25 04:16:11.636458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.377 [2024-07-25 04:16:11.636614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:56.377 [2024-07-25 04:16:11.636642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.377 [2024-07-25 04:16:11.646542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.377 [2024-07-25 04:16:11.646839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.377 [2024-07-25 04:16:11.646871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.377 [2024-07-25 04:16:11.657061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.377 [2024-07-25 04:16:11.657460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.377 [2024-07-25 04:16:11.657489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.377 [2024-07-25 04:16:11.667572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.377 [2024-07-25 04:16:11.667919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.377 [2024-07-25 04:16:11.667964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.635 [2024-07-25 04:16:11.678604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.635 [2024-07-25 04:16:11.678762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.635 [2024-07-25 04:16:11.678790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.635 [2024-07-25 04:16:11.689153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.635 [2024-07-25 04:16:11.689490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.635 [2024-07-25 04:16:11.689518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.635 [2024-07-25 04:16:11.699922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.635 [2024-07-25 04:16:11.700259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.635 [2024-07-25 04:16:11.700288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.635 [2024-07-25 04:16:11.710015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.635 [2024-07-25 04:16:11.710352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.635 [2024-07-25 04:16:11.710387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.635 [2024-07-25 04:16:11.719738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.635 [2024-07-25 04:16:11.719970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.635 [2024-07-25 04:16:11.719998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.635 [2024-07-25 04:16:11.728859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.635 [2024-07-25 04:16:11.729252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.635 [2024-07-25 04:16:11.729281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.635 [2024-07-25 04:16:11.738679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.635 [2024-07-25 04:16:11.738961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.635 [2024-07-25 04:16:11.738989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.635 [2024-07-25 04:16:11.747937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.635 [2024-07-25 04:16:11.748217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.635 [2024-07-25 04:16:11.748252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.635 [2024-07-25 04:16:11.757606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 
00:32:56.636 [2024-07-25 04:16:11.757892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.757921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.766214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.766573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.766602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.775146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.775528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.775556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.784411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.784827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.784855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.793771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.794100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.794128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.803004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.803457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.803485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.812018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.812299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.812327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.821775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.822131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.822160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.831464] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.831857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.831885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.841681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.842034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.842063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.850680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.851035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.851063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.860146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.860531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.860559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.869950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.870320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.870350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.879651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.879988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.880018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.888761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.889076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.889105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.897513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.897863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.897897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.907194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.907546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.907576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.917314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.917728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.917757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.636 [2024-07-25 04:16:11.926813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.636 [2024-07-25 04:16:11.927140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.636 [2024-07-25 04:16:11.927168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.896 [2024-07-25 04:16:11.934992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.896 [2024-07-25 04:16:11.935272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.896 [2024-07-25 04:16:11.935312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.896 [2024-07-25 04:16:11.943831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:11.944174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:11.944204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:11.952590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:11.953048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:11.953077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:11.962414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:11.962833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:11.962862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:11.971770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:11.972159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:56.897 [2024-07-25 04:16:11.972188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:11.980409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:11.980827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:11.980869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:11.990455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:11.990812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:11.990840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:11.998730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:11.999046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:11.999074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.008384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.008692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.008720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.016957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.017348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.017376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.026765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.027073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.027101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.036249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.036623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.036652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.044332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.044601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.044629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.053801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.054142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.054170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.063268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.063605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.063634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.072010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.072347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.072376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.081493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 
00:32:56.897 [2024-07-25 04:16:12.081798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.081826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.090513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.090864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.090892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.098504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.098820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.098848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.108191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.108580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.108608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.118368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.118755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.118784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.127557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.127851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.127879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.136381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.136812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.136845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.145714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.146044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.146072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.154827] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.155213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.155247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.164163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.164493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.164521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.173744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.174098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.174126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:56.897 [2024-07-25 04:16:12.182461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:56.897 [2024-07-25 04:16:12.182807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.897 [2024-07-25 04:16:12.182834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0
00:32:56.898 [2024-07-25 04:16:12.191770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:56.898 [2024-07-25 04:16:12.192068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.898 [2024-07-25 04:16:12.192097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.156 [2024-07-25 04:16:12.201045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.156 [2024-07-25 04:16:12.201392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.156 [2024-07-25 04:16:12.201420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.156 [2024-07-25 04:16:12.210313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.156 [2024-07-25 04:16:12.210574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.156 [2024-07-25 04:16:12.210602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.156 [2024-07-25 04:16:12.219299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.156 [2024-07-25 04:16:12.219608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.156 [2024-07-25 04:16:12.219636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.156 [2024-07-25 04:16:12.227723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.156 [2024-07-25 04:16:12.228135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.156 [2024-07-25 04:16:12.228164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.156 [2024-07-25 04:16:12.236947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.156 [2024-07-25 04:16:12.237258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.156 [2024-07-25 04:16:12.237286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.156 [2024-07-25 04:16:12.245951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.156 [2024-07-25 04:16:12.246294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.156 [2024-07-25 04:16:12.246323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.156 [2024-07-25 04:16:12.254531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.156 [2024-07-25 04:16:12.254864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.156 [2024-07-25 04:16:12.254893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.156 [2024-07-25 04:16:12.263531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.156 [2024-07-25 04:16:12.263846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.156 [2024-07-25 04:16:12.263874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.156 [2024-07-25 04:16:12.271764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.156 [2024-07-25 04:16:12.272075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.156 [2024-07-25 04:16:12.272103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.156 [2024-07-25 04:16:12.280908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.281232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.281267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.289395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.289721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.289749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.298805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.299194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.299222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.307891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.308236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.308271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.316694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.316966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.316994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.326040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.326392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.326420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.335567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.335907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.335935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.345185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.345533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.345562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.354212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.354562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.354590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.363974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.364332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.364360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.373607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.373871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.373906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.382743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.383075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.383104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.391858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.392151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.392180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.400575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.400891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.400920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.410129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.410497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.410526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.419612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.420075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.420103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.429471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.429770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.429798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.438495] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.438880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.438908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.157 [2024-07-25 04:16:12.446662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.157 [2024-07-25 04:16:12.446938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.157 [2024-07-25 04:16:12.446967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.456262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.456621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.456649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.463828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.464164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.464192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.473949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.474284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.474312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.483759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.484087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.484115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.493431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.493772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.493801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.502558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.502878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.502907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.512299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.512677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.512706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.521565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.521923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.521951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.531579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.531879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.531907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.541380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.541728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.541757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.551107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.551465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.551495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.559848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.560239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.560285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.569628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.569941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.569970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.578364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.578695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.578724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.587880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.588237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.588272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.597439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.597696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.597724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.605876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.606219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.606254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.614839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.615201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.615240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.624103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.624452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.416 [2024-07-25 04:16:12.624480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.416 [2024-07-25 04:16:12.633721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.416 [2024-07-25 04:16:12.634094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.417 [2024-07-25 04:16:12.634122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.417 [2024-07-25 04:16:12.642375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.417 [2024-07-25 04:16:12.642717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.417 [2024-07-25 04:16:12.642746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.417 [2024-07-25 04:16:12.651575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.417 [2024-07-25 04:16:12.651851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.417 [2024-07-25 04:16:12.651878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.417 [2024-07-25 04:16:12.660150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.417 [2024-07-25 04:16:12.660496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.417 [2024-07-25 04:16:12.660525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.417 [2024-07-25 04:16:12.668883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.417 [2024-07-25 04:16:12.669264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.417 [2024-07-25 04:16:12.669293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.417 [2024-07-25 04:16:12.677143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.417 [2024-07-25 04:16:12.677564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.417 [2024-07-25 04:16:12.677592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.417 [2024-07-25 04:16:12.686976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.417 [2024-07-25 04:16:12.687298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.417 [2024-07-25 04:16:12.687326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.417 [2024-07-25 04:16:12.695970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.417 [2024-07-25 04:16:12.696274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.417 [2024-07-25 04:16:12.696310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.417 [2024-07-25 04:16:12.704843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.417 [2024-07-25 04:16:12.705150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.417 [2024-07-25 04:16:12.705179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.675 [2024-07-25 04:16:12.714791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.675 [2024-07-25 04:16:12.715097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.675 [2024-07-25 04:16:12.715126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.675 [2024-07-25 04:16:12.724225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.675 [2024-07-25 04:16:12.724584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.675 [2024-07-25 04:16:12.724612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.675 [2024-07-25 04:16:12.733903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.675 [2024-07-25 04:16:12.734310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.675 [2024-07-25 04:16:12.734339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.675 [2024-07-25 04:16:12.743492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.675 [2024-07-25 04:16:12.743834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.675 [2024-07-25 04:16:12.743862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.675 [2024-07-25 04:16:12.752967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.675 [2024-07-25 04:16:12.753286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.675 [2024-07-25 04:16:12.753323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.675 [2024-07-25 04:16:12.763060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.675 [2024-07-25 04:16:12.763392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.675 [2024-07-25 04:16:12.763421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.675 [2024-07-25 04:16:12.772866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.675 [2024-07-25 04:16:12.773199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.773235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.782856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.783129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.783158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.792626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.792998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.793026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.802399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.802734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.802763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.811927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.812264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.812293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.821749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.822093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.822122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.831219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.831640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.831668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.840153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.840484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.840512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.849450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.849759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.849787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.859633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.859942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.859970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.869175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.869452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.869481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.878278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.878635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.878663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.888300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.888683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.888711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.898563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.898978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.899006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.909093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.909453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.909483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.919014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.919350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.919378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.928853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.929150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.929178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.938372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.938709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.938737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.676 [2024-07-25 04:16:12.948353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90
00:32:57.676 [2024-07-25 04:16:12.948684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.676 [2024-07-25 04:16:12.948712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.676 [2024-07-25 04:16:12.958731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.676 [2024-07-25 04:16:12.959162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.676 [2024-07-25 04:16:12.959189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:57.676 [2024-07-25 04:16:12.968134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.676 [2024-07-25 04:16:12.968501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.676 [2024-07-25 04:16:12.968538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:57.935 [2024-07-25 04:16:12.978766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.935 [2024-07-25 04:16:12.979196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.935 [2024-07-25 04:16:12.979223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.935 [2024-07-25 04:16:12.988958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.935 [2024-07-25 04:16:12.989293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.935 [2024-07-25 04:16:12.989322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.935 [2024-07-25 04:16:12.999160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.935 [2024-07-25 04:16:12.999493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.935 [2024-07-25 04:16:12.999524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:57.935 [2024-07-25 04:16:13.008736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.935 [2024-07-25 04:16:13.009047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.935 [2024-07-25 04:16:13.009075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:57.935 [2024-07-25 04:16:13.018641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.935 [2024-07-25 04:16:13.019090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.935 [2024-07-25 04:16:13.019142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.935 [2024-07-25 04:16:13.029082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.935 [2024-07-25 04:16:13.029509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.935 [2024-07-25 04:16:13.029543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.935 [2024-07-25 04:16:13.039149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.935 [2024-07-25 04:16:13.039469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.935 [2024-07-25 04:16:13.039499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:57.935 [2024-07-25 04:16:13.048761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.935 [2024-07-25 04:16:13.049077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.935 [2024-07-25 04:16:13.049106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:57.935 [2024-07-25 04:16:13.058682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.935 [2024-07-25 04:16:13.059003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.935 [2024-07-25 04:16:13.059032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.935 [2024-07-25 04:16:13.068207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 
00:32:57.935 [2024-07-25 04:16:13.068688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.935 [2024-07-25 04:16:13.068717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.935 [2024-07-25 04:16:13.078037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.935 [2024-07-25 04:16:13.078336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.935 [2024-07-25 04:16:13.078365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:57.935 [2024-07-25 04:16:13.087260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.935 [2024-07-25 04:16:13.087643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.935 [2024-07-25 04:16:13.087685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:57.935 [2024-07-25 04:16:13.096725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.935 [2024-07-25 04:16:13.097095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.935 [2024-07-25 04:16:13.097123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.935 [2024-07-25 04:16:13.106776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa355c0) with pdu=0x2000190fef90 00:32:57.935 [2024-07-25 04:16:13.107107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.935 [2024-07-25 04:16:13.107136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.935 00:32:57.935 Latency(us) 00:32:57.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.935 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:57.935 nvme0n1 : 2.01 3174.37 396.80 0.00 0.00 5028.74 3519.53 12524.66 00:32:57.935 =================================================================================================================== 00:32:57.935 Total : 3174.37 396.80 0.00 0.00 5028.74 3519.53 12524.66 00:32:57.935 0 00:32:57.935 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:57.935 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:57.935 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:57.935 | .driver_specific 00:32:57.935 | .nvme_error 00:32:57.935 | .status_code 00:32:57.935 | .command_transient_transport_error' 00:32:57.935 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:58.193 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 205 > 0 )) 00:32:58.194 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 977525 00:32:58.194 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 977525 
']' 00:32:58.194 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 977525 00:32:58.194 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:32:58.194 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:58.194 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 977525 00:32:58.194 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:58.194 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:58.194 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 977525' 00:32:58.194 killing process with pid 977525 00:32:58.194 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 977525 00:32:58.194 Received shutdown signal, test time was about 2.000000 seconds 00:32:58.194 00:32:58.194 Latency(us) 00:32:58.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.194 =================================================================================================================== 00:32:58.194 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:58.194 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 977525 00:32:58.452 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 976161 00:32:58.452 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 976161 ']' 00:32:58.452 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 976161 00:32:58.452 04:16:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:32:58.452 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:58.452 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 976161 00:32:58.452 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:58.452 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:58.452 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 976161' 00:32:58.452 killing process with pid 976161 00:32:58.452 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 976161 00:32:58.452 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 976161 00:32:58.710 00:32:58.710 real 0m15.062s 00:32:58.710 user 0m28.995s 00:32:58.710 sys 0m4.414s 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:58.710 ************************************ 00:32:58.710 END TEST nvmf_digest_error 00:32:58.710 ************************************ 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # 
'[' tcp == tcp ']' 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:58.710 rmmod nvme_tcp 00:32:58.710 rmmod nvme_fabrics 00:32:58.710 rmmod nvme_keyring 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 976161 ']' 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 976161 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 976161 ']' 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 976161 00:32:58.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (976161) - No such process 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 976161 is not found' 00:32:58.710 Process with pid 976161 is not found 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:58.710 04:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:01.239 00:33:01.239 real 0m34.591s 00:33:01.239 user 1m0.058s 00:33:01.239 sys 0m10.045s 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:01.239 ************************************ 00:33:01.239 END TEST nvmf_digest 00:33:01.239 ************************************ 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.239 ************************************ 00:33:01.239 START TEST nvmf_bdevperf 00:33:01.239 ************************************ 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:01.239 * Looking for test storage... 
00:33:01.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.239 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:01.240 04:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:03.180 04:16:18 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:03.180 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:03.180 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:03.180 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:03.180 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:03.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:03.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:33:03.180 00:33:03.180 --- 10.0.0.2 ping statistics --- 00:33:03.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.180 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:33:03.180 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:03.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:03.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:33:03.180 00:33:03.180 --- 10.0.0.1 ping statistics --- 00:33:03.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.181 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:03.181 
04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=979868 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 979868 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 979868 ']' 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:03.181 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.181 [2024-07-25 04:16:18.268105] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:33:03.181 [2024-07-25 04:16:18.268212] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:03.181 EAL: No free 2048 kB hugepages reported on node 1 00:33:03.181 [2024-07-25 04:16:18.307110] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:03.181 [2024-07-25 04:16:18.339182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:03.181 [2024-07-25 04:16:18.435038] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:03.181 [2024-07-25 04:16:18.435091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:03.181 [2024-07-25 04:16:18.435117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:03.181 [2024-07-25 04:16:18.435131] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:03.181 [2024-07-25 04:16:18.435143] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:03.181 [2024-07-25 04:16:18.435238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:03.181 [2024-07-25 04:16:18.435351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:03.181 [2024-07-25 04:16:18.435355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.439 [2024-07-25 04:16:18.565279] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.439 Malloc0 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.439 [2024-07-25 04:16:18.622471] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:03.439 
04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:03.439 { 00:33:03.439 "params": { 00:33:03.439 "name": "Nvme$subsystem", 00:33:03.439 "trtype": "$TEST_TRANSPORT", 00:33:03.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:03.439 "adrfam": "ipv4", 00:33:03.439 "trsvcid": "$NVMF_PORT", 00:33:03.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:03.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:03.439 "hdgst": ${hdgst:-false}, 00:33:03.439 "ddgst": ${ddgst:-false} 00:33:03.439 }, 00:33:03.439 "method": "bdev_nvme_attach_controller" 00:33:03.439 } 00:33:03.439 EOF 00:33:03.439 )") 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:03.439 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:03.440 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:03.440 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:03.440 "params": { 00:33:03.440 "name": "Nvme1", 00:33:03.440 "trtype": "tcp", 00:33:03.440 "traddr": "10.0.0.2", 00:33:03.440 "adrfam": "ipv4", 00:33:03.440 "trsvcid": "4420", 00:33:03.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:03.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:03.440 "hdgst": false, 00:33:03.440 "ddgst": false 00:33:03.440 }, 00:33:03.440 "method": "bdev_nvme_attach_controller" 00:33:03.440 }' 00:33:03.440 [2024-07-25 04:16:18.667734] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:33:03.440 [2024-07-25 04:16:18.667822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979899 ] 00:33:03.440 EAL: No free 2048 kB hugepages reported on node 1 00:33:03.440 [2024-07-25 04:16:18.700665] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:03.440 [2024-07-25 04:16:18.729863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.697 [2024-07-25 04:16:18.817790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.954 Running I/O for 1 seconds... 00:33:04.885 00:33:04.885 Latency(us) 00:33:04.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.885 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:04.885 Verification LBA range: start 0x0 length 0x4000 00:33:04.885 Nvme1n1 : 1.02 8141.56 31.80 0.00 0.00 15660.32 2949.12 16990.81 00:33:04.885 =================================================================================================================== 00:33:04.885 Total : 8141.56 31.80 0.00 0.00 15660.32 2949.12 16990.81 00:33:05.142 04:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=980157 00:33:05.142 04:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:05.142 04:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:05.142 04:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:05.142 04:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:05.142 04:16:20 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:05.142 04:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:05.142 04:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:05.142 { 00:33:05.142 "params": { 00:33:05.142 "name": "Nvme$subsystem", 00:33:05.142 "trtype": "$TEST_TRANSPORT", 00:33:05.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:05.142 "adrfam": "ipv4", 00:33:05.142 "trsvcid": "$NVMF_PORT", 00:33:05.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:05.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:05.142 "hdgst": ${hdgst:-false}, 00:33:05.142 "ddgst": ${ddgst:-false} 00:33:05.142 }, 00:33:05.142 "method": "bdev_nvme_attach_controller" 00:33:05.142 } 00:33:05.142 EOF 00:33:05.142 )") 00:33:05.142 04:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:05.142 04:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:05.142 04:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:05.142 04:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:05.142 "params": { 00:33:05.142 "name": "Nvme1", 00:33:05.142 "trtype": "tcp", 00:33:05.142 "traddr": "10.0.0.2", 00:33:05.142 "adrfam": "ipv4", 00:33:05.142 "trsvcid": "4420", 00:33:05.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:05.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:05.142 "hdgst": false, 00:33:05.142 "ddgst": false 00:33:05.142 }, 00:33:05.142 "method": "bdev_nvme_attach_controller" 00:33:05.142 }' 00:33:05.142 [2024-07-25 04:16:20.391009] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:33:05.142 [2024-07-25 04:16:20.391111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980157 ] 00:33:05.142 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.142 [2024-07-25 04:16:20.423420] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:05.400 [2024-07-25 04:16:20.451980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.400 [2024-07-25 04:16:20.535449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.657 Running I/O for 15 seconds... 00:33:08.192 04:16:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 979868 00:33:08.192 04:16:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:08.192 [2024-07-25 04:16:23.361483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.192 [2024-07-25 04:16:23.361554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.192 [2024-07-25 04:16:23.361602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.192 [2024-07-25 04:16:23.361623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.192 [2024-07-25 04:16:23.361644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.192 [2024-07-25 04:16:23.361661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.192 [2024-07-25 04:16:23.361679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.192 [2024-07-25 04:16:23.361697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.192 [2024-07-25 04:16:23.361716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.192 [2024-07-25 04:16:23.361735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.192 [2024-07-25 04:16:23.361755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.192 [2024-07-25 04:16:23.361772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.193 [2024-07-25 04:16:23.361792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.193 [2024-07-25 04:16:23.361810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.193 [2024-07-25 04:16:23.361839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.193 [2024-07-25 04:16:23.361857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.193 [2024-07-25 04:16:23.361876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.193 
[2024-07-25 04:16:23.361891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.193 [2024-07-25 04:16:23.361909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.193 [2024-07-25 04:16:23.361926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.193 [2024-07-25 04:16:23.361944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.193 [2024-07-25 04:16:23.361960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.193 [2024-07-25 04:16:23.361979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.193 [2024-07-25 04:16:23.361994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.193 [2024-07-25 04:16:23.362013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.193 [2024-07-25 04:16:23.362030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.193 [2024-07-25 04:16:23.362049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.193 [2024-07-25 04:16:23.362066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.193 [2024-07-25 04:16:23.362085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.193 [2024-07-25 04:16:23.362101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.193 [2024-07-25 04:16:23.362118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.193 [2024-07-25 04:16:23.362134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.193 [2024-07-25 04:16:23.362152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:33528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.193 [2024-07-25 04:16:23.362168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.193 [2024-07-25 04:16:23.362185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.193 [2024-07-25 04:16:23.362201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.193 [2024-07-25 04:16:23.362219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.193 [2024-07-25 04:16:23.362238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.193 [2024-07-25 04:16:23.362267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.193 [2024-07-25 04:16:23.362303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:08.193 [2024-07-25 04:16:23.362321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:08.193 [2024-07-25 04:16:23.362336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" pairs repeated for lba 32720 through 33504, and WRITE / "ABORTED - SQ DELETION" pairs for lba 33536 through 33584 ...]
00:33:08.196 [2024-07-25 04:16:23.366002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a3e60 is same with the state(5) to be set
00:33:08.196 [2024-07-25 04:16:23.366020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:08.196 [2024-07-25 04:16:23.366034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:08.196 [2024-07-25 04:16:23.366047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33512 len:8 PRP1 0x0 PRP2 0x0
00:33:08.196 [2024-07-25 04:16:23.366061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:08.196 [2024-07-25 04:16:23.366125] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21a3e60 was disconnected and freed. reset controller.
00:33:08.196 [2024-07-25 04:16:23.366204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:08.196 [2024-07-25 04:16:23.366239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ASYNC EVENT REQUEST / "ABORTED - SQ DELETION" pairs repeated for cid 1 through 3 ...]
00:33:08.196 [2024-07-25 04:16:23.366370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.196 [2024-07-25 04:16:23.370205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.196 [2024-07-25 04:16:23.370267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.196 [2024-07-25 04:16:23.371047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.196 [2024-07-25 04:16:23.371093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.196 [2024-07-25 04:16:23.371112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.196 [2024-07-25 04:16:23.371370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.196 [2024-07-25 04:16:23.371616] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.196 [2024-07-25 04:16:23.371642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.196 [2024-07-25 04:16:23.371660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.196 [2024-07-25 04:16:23.375265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.196 [2024-07-25 04:16:23.384335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.196 [2024-07-25 04:16:23.384783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.196 [2024-07-25 04:16:23.384816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.196 [2024-07-25 04:16:23.384835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.196 [2024-07-25 04:16:23.385074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.196 [2024-07-25 04:16:23.385329] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.196 [2024-07-25 04:16:23.385354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.196 [2024-07-25 04:16:23.385371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.196 [2024-07-25 04:16:23.389112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.196 [2024-07-25 04:16:23.398368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.196 [2024-07-25 04:16:23.398785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.196 [2024-07-25 04:16:23.398818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.196 [2024-07-25 04:16:23.398838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.196 [2024-07-25 04:16:23.399077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.196 [2024-07-25 04:16:23.399332] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.196 [2024-07-25 04:16:23.399358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.196 [2024-07-25 04:16:23.399374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.196 [2024-07-25 04:16:23.402932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.196 [2024-07-25 04:16:23.412186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.196 [2024-07-25 04:16:23.412663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.196 [2024-07-25 04:16:23.412691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.196 [2024-07-25 04:16:23.412708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.196 [2024-07-25 04:16:23.412956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.196 [2024-07-25 04:16:23.413201] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.196 [2024-07-25 04:16:23.413225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.196 [2024-07-25 04:16:23.413251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.196 [2024-07-25 04:16:23.416815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.196 [2024-07-25 04:16:23.426060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.196 [2024-07-25 04:16:23.426498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.196 [2024-07-25 04:16:23.426531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.196 [2024-07-25 04:16:23.426549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.196 [2024-07-25 04:16:23.426788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.196 [2024-07-25 04:16:23.427031] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.196 [2024-07-25 04:16:23.427056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.196 [2024-07-25 04:16:23.427072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.196 [2024-07-25 04:16:23.430634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.196 [2024-07-25 04:16:23.439877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.196 [2024-07-25 04:16:23.440292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.196 [2024-07-25 04:16:23.440324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.196 [2024-07-25 04:16:23.440345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.196 [2024-07-25 04:16:23.440583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.196 [2024-07-25 04:16:23.440826] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.196 [2024-07-25 04:16:23.440850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.196 [2024-07-25 04:16:23.440867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.196 [2024-07-25 04:16:23.444429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.196 [2024-07-25 04:16:23.453702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.196 [2024-07-25 04:16:23.454134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.196 [2024-07-25 04:16:23.454167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.196 [2024-07-25 04:16:23.454192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.196 [2024-07-25 04:16:23.454443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.196 [2024-07-25 04:16:23.454688] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.196 [2024-07-25 04:16:23.454713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.196 [2024-07-25 04:16:23.454729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.196 [2024-07-25 04:16:23.458288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.196 [2024-07-25 04:16:23.467543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.196 [2024-07-25 04:16:23.467978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.196 [2024-07-25 04:16:23.468012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.197 [2024-07-25 04:16:23.468031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.197 [2024-07-25 04:16:23.468282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.197 [2024-07-25 04:16:23.468526] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.197 [2024-07-25 04:16:23.468552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.197 [2024-07-25 04:16:23.468569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.197 [2024-07-25 04:16:23.472142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.197 [2024-07-25 04:16:23.481412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.197 [2024-07-25 04:16:23.481860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.197 [2024-07-25 04:16:23.481892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.197 [2024-07-25 04:16:23.481911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.197 [2024-07-25 04:16:23.482149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.197 [2024-07-25 04:16:23.482407] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.197 [2024-07-25 04:16:23.482433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.197 [2024-07-25 04:16:23.482449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.197 [2024-07-25 04:16:23.486006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.455 [2024-07-25 04:16:23.495271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.455 [2024-07-25 04:16:23.495677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.455 [2024-07-25 04:16:23.495711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.455 [2024-07-25 04:16:23.495730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.455 [2024-07-25 04:16:23.495970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.455 [2024-07-25 04:16:23.496225] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.455 [2024-07-25 04:16:23.496266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.455 [2024-07-25 04:16:23.496286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.455 [2024-07-25 04:16:23.499840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.455 [2024-07-25 04:16:23.509102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.455 [2024-07-25 04:16:23.509530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.455 [2024-07-25 04:16:23.509565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.455 [2024-07-25 04:16:23.509585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.455 [2024-07-25 04:16:23.509826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.455 [2024-07-25 04:16:23.510070] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.455 [2024-07-25 04:16:23.510096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.455 [2024-07-25 04:16:23.510113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.455 [2024-07-25 04:16:23.513683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.455 [2024-07-25 04:16:23.522937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.455 [2024-07-25 04:16:23.523371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.455 [2024-07-25 04:16:23.523405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.455 [2024-07-25 04:16:23.523424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.455 [2024-07-25 04:16:23.523664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.455 [2024-07-25 04:16:23.523909] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.455 [2024-07-25 04:16:23.523935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.455 [2024-07-25 04:16:23.523952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.455 [2024-07-25 04:16:23.527522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.455 [2024-07-25 04:16:23.536778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.455 [2024-07-25 04:16:23.537202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.455 [2024-07-25 04:16:23.537235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.455 [2024-07-25 04:16:23.537268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.455 [2024-07-25 04:16:23.537509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.455 [2024-07-25 04:16:23.537753] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.455 [2024-07-25 04:16:23.537779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.455 [2024-07-25 04:16:23.537795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.455 [2024-07-25 04:16:23.541361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.455 [2024-07-25 04:16:23.550622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.455 [2024-07-25 04:16:23.551081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.455 [2024-07-25 04:16:23.551114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.455 [2024-07-25 04:16:23.551133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.455 [2024-07-25 04:16:23.551384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.455 [2024-07-25 04:16:23.551628] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.455 [2024-07-25 04:16:23.551654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.455 [2024-07-25 04:16:23.551671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.455 [2024-07-25 04:16:23.555228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.455 [2024-07-25 04:16:23.564490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.455 [2024-07-25 04:16:23.564923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.455 [2024-07-25 04:16:23.564956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.455 [2024-07-25 04:16:23.564976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.455 [2024-07-25 04:16:23.565215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.455 [2024-07-25 04:16:23.565473] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.455 [2024-07-25 04:16:23.565499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.455 [2024-07-25 04:16:23.565516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.455 [2024-07-25 04:16:23.569075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.455 [2024-07-25 04:16:23.578351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.455 [2024-07-25 04:16:23.578751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.456 [2024-07-25 04:16:23.578784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.456 [2024-07-25 04:16:23.578803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.456 [2024-07-25 04:16:23.579042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.456 [2024-07-25 04:16:23.579300] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.456 [2024-07-25 04:16:23.579325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.456 [2024-07-25 04:16:23.579341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.456 [2024-07-25 04:16:23.582896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.456 [2024-07-25 04:16:23.592368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.456 [2024-07-25 04:16:23.592793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.456 [2024-07-25 04:16:23.592825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.456 [2024-07-25 04:16:23.592844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.456 [2024-07-25 04:16:23.593088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.456 [2024-07-25 04:16:23.593345] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.456 [2024-07-25 04:16:23.593370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.456 [2024-07-25 04:16:23.593386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.456 [2024-07-25 04:16:23.596940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.456 [2024-07-25 04:16:23.606203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.456 [2024-07-25 04:16:23.606643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.456 [2024-07-25 04:16:23.606676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.456 [2024-07-25 04:16:23.606694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.456 [2024-07-25 04:16:23.606932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.456 [2024-07-25 04:16:23.607175] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.456 [2024-07-25 04:16:23.607201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.456 [2024-07-25 04:16:23.607217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.456 [2024-07-25 04:16:23.610786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.456 [2024-07-25 04:16:23.620041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.456 [2024-07-25 04:16:23.620477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.456 [2024-07-25 04:16:23.620510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.456 [2024-07-25 04:16:23.620528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.456 [2024-07-25 04:16:23.620767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.456 [2024-07-25 04:16:23.621010] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.456 [2024-07-25 04:16:23.621035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.456 [2024-07-25 04:16:23.621051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.456 [2024-07-25 04:16:23.624622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.456 [2024-07-25 04:16:23.633888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.456 [2024-07-25 04:16:23.634271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.456 [2024-07-25 04:16:23.634305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.456 [2024-07-25 04:16:23.634325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.456 [2024-07-25 04:16:23.634564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.456 [2024-07-25 04:16:23.634809] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.456 [2024-07-25 04:16:23.634834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.456 [2024-07-25 04:16:23.634856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.456 [2024-07-25 04:16:23.638432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.456 [2024-07-25 04:16:23.647904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.456 [2024-07-25 04:16:23.648320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.456 [2024-07-25 04:16:23.648354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.456 [2024-07-25 04:16:23.648373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.456 [2024-07-25 04:16:23.648612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.456 [2024-07-25 04:16:23.648855] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.456 [2024-07-25 04:16:23.648880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.456 [2024-07-25 04:16:23.648897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.456 [2024-07-25 04:16:23.652463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.456 [2024-07-25 04:16:23.661728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.456 [2024-07-25 04:16:23.662153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.456 [2024-07-25 04:16:23.662186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.456 [2024-07-25 04:16:23.662206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.456 [2024-07-25 04:16:23.662459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.456 [2024-07-25 04:16:23.662703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.456 [2024-07-25 04:16:23.662729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.456 [2024-07-25 04:16:23.662746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.456 [2024-07-25 04:16:23.666311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.456 [2024-07-25 04:16:23.675572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.456 [2024-07-25 04:16:23.676013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.456 [2024-07-25 04:16:23.676045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.456 [2024-07-25 04:16:23.676064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.456 [2024-07-25 04:16:23.676318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.456 [2024-07-25 04:16:23.676561] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.456 [2024-07-25 04:16:23.676587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.456 [2024-07-25 04:16:23.676603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.456 [2024-07-25 04:16:23.680159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.456 [2024-07-25 04:16:23.689420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.456 [2024-07-25 04:16:23.689839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.456 [2024-07-25 04:16:23.689877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.456 [2024-07-25 04:16:23.689897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.456 [2024-07-25 04:16:23.690136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.456 [2024-07-25 04:16:23.690393] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.456 [2024-07-25 04:16:23.690420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.456 [2024-07-25 04:16:23.690437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.456 [2024-07-25 04:16:23.693997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.456 [2024-07-25 04:16:23.703280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.456 [2024-07-25 04:16:23.703706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.456 [2024-07-25 04:16:23.703738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.456 [2024-07-25 04:16:23.703757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.456 [2024-07-25 04:16:23.703996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.456 [2024-07-25 04:16:23.704239] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.456 [2024-07-25 04:16:23.704276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.456 [2024-07-25 04:16:23.704298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.456 [2024-07-25 04:16:23.707855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.456 [2024-07-25 04:16:23.717128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.456 [2024-07-25 04:16:23.717541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.457 [2024-07-25 04:16:23.717574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.457 [2024-07-25 04:16:23.717593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.457 [2024-07-25 04:16:23.717832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.457 [2024-07-25 04:16:23.718075] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.457 [2024-07-25 04:16:23.718099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.457 [2024-07-25 04:16:23.718115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.457 [2024-07-25 04:16:23.721683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.457 [2024-07-25 04:16:23.731155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.457 [2024-07-25 04:16:23.731586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.457 [2024-07-25 04:16:23.731621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.457 [2024-07-25 04:16:23.731640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.457 [2024-07-25 04:16:23.731878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.457 [2024-07-25 04:16:23.732127] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.457 [2024-07-25 04:16:23.732153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.457 [2024-07-25 04:16:23.732170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.457 [2024-07-25 04:16:23.735739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.457 [2024-07-25 04:16:23.744987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.457 [2024-07-25 04:16:23.745399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.457 [2024-07-25 04:16:23.745432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.457 [2024-07-25 04:16:23.745451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.457 [2024-07-25 04:16:23.745690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.457 [2024-07-25 04:16:23.745934] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.457 [2024-07-25 04:16:23.745959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.457 [2024-07-25 04:16:23.745977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.457 [2024-07-25 04:16:23.749545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.715 [2024-07-25 04:16:23.759011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.715 [2024-07-25 04:16:23.759453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.715 [2024-07-25 04:16:23.759486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.715 [2024-07-25 04:16:23.759505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.715 [2024-07-25 04:16:23.759745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.716 [2024-07-25 04:16:23.759989] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.716 [2024-07-25 04:16:23.760015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.716 [2024-07-25 04:16:23.760031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.716 [2024-07-25 04:16:23.763601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.716 [2024-07-25 04:16:23.772868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.716 [2024-07-25 04:16:23.773299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.716 [2024-07-25 04:16:23.773333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.716 [2024-07-25 04:16:23.773351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.716 [2024-07-25 04:16:23.773591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.716 [2024-07-25 04:16:23.773833] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.716 [2024-07-25 04:16:23.773859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.716 [2024-07-25 04:16:23.773875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.716 [2024-07-25 04:16:23.777452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.716 [2024-07-25 04:16:23.786706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.716 [2024-07-25 04:16:23.787090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.716 [2024-07-25 04:16:23.787123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.716 [2024-07-25 04:16:23.787142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.716 [2024-07-25 04:16:23.787405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.716 [2024-07-25 04:16:23.787650] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.716 [2024-07-25 04:16:23.787676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.716 [2024-07-25 04:16:23.787693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.716 [2024-07-25 04:16:23.791272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.716 [2024-07-25 04:16:23.800530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.716 [2024-07-25 04:16:23.800959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.716 [2024-07-25 04:16:23.800992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.716 [2024-07-25 04:16:23.801011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.716 [2024-07-25 04:16:23.801268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.716 [2024-07-25 04:16:23.801511] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.716 [2024-07-25 04:16:23.801536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.716 [2024-07-25 04:16:23.801554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.716 [2024-07-25 04:16:23.805113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.716 [2024-07-25 04:16:23.814374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.716 [2024-07-25 04:16:23.814810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.716 [2024-07-25 04:16:23.814843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.716 [2024-07-25 04:16:23.814862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.716 [2024-07-25 04:16:23.815100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.716 [2024-07-25 04:16:23.815358] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.716 [2024-07-25 04:16:23.815385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.716 [2024-07-25 04:16:23.815401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.716 [2024-07-25 04:16:23.818957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.716 [2024-07-25 04:16:23.828204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.716 [2024-07-25 04:16:23.828637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.716 [2024-07-25 04:16:23.828670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.716 [2024-07-25 04:16:23.828694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.716 [2024-07-25 04:16:23.828935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.716 [2024-07-25 04:16:23.829179] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.716 [2024-07-25 04:16:23.829204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.716 [2024-07-25 04:16:23.829221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.716 [2024-07-25 04:16:23.832790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.716 [2024-07-25 04:16:23.842046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.716 [2024-07-25 04:16:23.842486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.716 [2024-07-25 04:16:23.842518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.716 [2024-07-25 04:16:23.842537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.716 [2024-07-25 04:16:23.842775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.716 [2024-07-25 04:16:23.843017] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.716 [2024-07-25 04:16:23.843043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.716 [2024-07-25 04:16:23.843060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.716 [2024-07-25 04:16:23.846633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.716 [2024-07-25 04:16:23.855883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.716 [2024-07-25 04:16:23.856305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.716 [2024-07-25 04:16:23.856338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.716 [2024-07-25 04:16:23.856357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.716 [2024-07-25 04:16:23.856595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.716 [2024-07-25 04:16:23.856838] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.716 [2024-07-25 04:16:23.856864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.716 [2024-07-25 04:16:23.856880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.716 [2024-07-25 04:16:23.860448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.716 [2024-07-25 04:16:23.869717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.716 [2024-07-25 04:16:23.870148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.716 [2024-07-25 04:16:23.870176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.716 [2024-07-25 04:16:23.870192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.716 [2024-07-25 04:16:23.870444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.716 [2024-07-25 04:16:23.870688] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.716 [2024-07-25 04:16:23.870718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.716 [2024-07-25 04:16:23.870736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.716 [2024-07-25 04:16:23.874320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.716 [2024-07-25 04:16:23.883578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.716 [2024-07-25 04:16:23.884004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.716 [2024-07-25 04:16:23.884036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.716 [2024-07-25 04:16:23.884055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.716 [2024-07-25 04:16:23.884308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.716 [2024-07-25 04:16:23.884551] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.716 [2024-07-25 04:16:23.884576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.716 [2024-07-25 04:16:23.884593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.716 [2024-07-25 04:16:23.888148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.716 [2024-07-25 04:16:23.897412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.716 [2024-07-25 04:16:23.897839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.716 [2024-07-25 04:16:23.897874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.717 [2024-07-25 04:16:23.897894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.717 [2024-07-25 04:16:23.898134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.717 [2024-07-25 04:16:23.898394] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.717 [2024-07-25 04:16:23.898421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.717 [2024-07-25 04:16:23.898438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.717 [2024-07-25 04:16:23.901999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.717 [2024-07-25 04:16:23.911258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.717 [2024-07-25 04:16:23.911692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.717 [2024-07-25 04:16:23.911724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.717 [2024-07-25 04:16:23.911743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.717 [2024-07-25 04:16:23.911982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.717 [2024-07-25 04:16:23.912224] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.717 [2024-07-25 04:16:23.912262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.717 [2024-07-25 04:16:23.912281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.717 [2024-07-25 04:16:23.915840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.717 [2024-07-25 04:16:23.925100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.717 [2024-07-25 04:16:23.925541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.717 [2024-07-25 04:16:23.925573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.717 [2024-07-25 04:16:23.925592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.717 [2024-07-25 04:16:23.925830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.717 [2024-07-25 04:16:23.926073] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.717 [2024-07-25 04:16:23.926098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.717 [2024-07-25 04:16:23.926115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.717 [2024-07-25 04:16:23.929683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.717 [2024-07-25 04:16:23.938934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.717 [2024-07-25 04:16:23.939368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.717 [2024-07-25 04:16:23.939400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.717 [2024-07-25 04:16:23.939419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.717 [2024-07-25 04:16:23.939658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.717 [2024-07-25 04:16:23.939900] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.717 [2024-07-25 04:16:23.939926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.717 [2024-07-25 04:16:23.939943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.717 [2024-07-25 04:16:23.943514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.717 [2024-07-25 04:16:23.952767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.717 [2024-07-25 04:16:23.953222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.717 [2024-07-25 04:16:23.953272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.717 [2024-07-25 04:16:23.953290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.717 [2024-07-25 04:16:23.953547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.717 [2024-07-25 04:16:23.953790] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.717 [2024-07-25 04:16:23.953815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.717 [2024-07-25 04:16:23.953832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.717 [2024-07-25 04:16:23.957399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.717 [2024-07-25 04:16:23.966656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.717 [2024-07-25 04:16:23.967099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.717 [2024-07-25 04:16:23.967131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.717 [2024-07-25 04:16:23.967155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.717 [2024-07-25 04:16:23.967408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.717 [2024-07-25 04:16:23.967652] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.717 [2024-07-25 04:16:23.967677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.717 [2024-07-25 04:16:23.967693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.717 [2024-07-25 04:16:23.971257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.717 [2024-07-25 04:16:23.980524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.717 [2024-07-25 04:16:23.980950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.717 [2024-07-25 04:16:23.980983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.717 [2024-07-25 04:16:23.981002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.717 [2024-07-25 04:16:23.981254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.717 [2024-07-25 04:16:23.981500] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.717 [2024-07-25 04:16:23.981526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.717 [2024-07-25 04:16:23.981543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.717 [2024-07-25 04:16:23.985098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.717 [2024-07-25 04:16:23.994359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.717 [2024-07-25 04:16:23.994785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.717 [2024-07-25 04:16:23.994817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.717 [2024-07-25 04:16:23.994836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.717 [2024-07-25 04:16:23.995074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.717 [2024-07-25 04:16:23.995332] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.717 [2024-07-25 04:16:23.995358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.717 [2024-07-25 04:16:23.995374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.717 [2024-07-25 04:16:23.998935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.717 [2024-07-25 04:16:24.008206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:08.717 [2024-07-25 04:16:24.008617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.717 [2024-07-25 04:16:24.008650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:08.717 [2024-07-25 04:16:24.008669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:08.717 [2024-07-25 04:16:24.008908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:08.717 [2024-07-25 04:16:24.009151] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:08.717 [2024-07-25 04:16:24.009176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:08.717 [2024-07-25 04:16:24.009198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:08.976 [2024-07-25 04:16:24.012765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:08.976 [2024-07-25 04:16:24.022237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.976 [2024-07-25 04:16:24.022674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.976 [2024-07-25 04:16:24.022706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.976 [2024-07-25 04:16:24.022724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.976 [2024-07-25 04:16:24.022963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.976 [2024-07-25 04:16:24.023206] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.976 [2024-07-25 04:16:24.023231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.976 [2024-07-25 04:16:24.023258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.976 [2024-07-25 04:16:24.026818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.976 [2024-07-25 04:16:24.036075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.976 [2024-07-25 04:16:24.036509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.976 [2024-07-25 04:16:24.036542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.976 [2024-07-25 04:16:24.036560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.976 [2024-07-25 04:16:24.036799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.976 [2024-07-25 04:16:24.037043] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.976 [2024-07-25 04:16:24.037068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.976 [2024-07-25 04:16:24.037085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.976 [2024-07-25 04:16:24.040645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.976 [2024-07-25 04:16:24.049900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.976 [2024-07-25 04:16:24.050301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.976 [2024-07-25 04:16:24.050334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.976 [2024-07-25 04:16:24.050353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.976 [2024-07-25 04:16:24.050593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.976 [2024-07-25 04:16:24.050837] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.976 [2024-07-25 04:16:24.050863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.976 [2024-07-25 04:16:24.050880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.976 [2024-07-25 04:16:24.054446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.976 [2024-07-25 04:16:24.063922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.976 [2024-07-25 04:16:24.064428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.976 [2024-07-25 04:16:24.064458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.976 [2024-07-25 04:16:24.064476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.976 [2024-07-25 04:16:24.064734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.976 [2024-07-25 04:16:24.064977] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.976 [2024-07-25 04:16:24.065002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.976 [2024-07-25 04:16:24.065017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.976 [2024-07-25 04:16:24.068584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.976 [2024-07-25 04:16:24.077876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.976 [2024-07-25 04:16:24.078312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.976 [2024-07-25 04:16:24.078345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.976 [2024-07-25 04:16:24.078364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.976 [2024-07-25 04:16:24.078603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.976 [2024-07-25 04:16:24.078847] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.976 [2024-07-25 04:16:24.078872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.976 [2024-07-25 04:16:24.078889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.976 [2024-07-25 04:16:24.082465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.976 [2024-07-25 04:16:24.091712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.976 [2024-07-25 04:16:24.092147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.976 [2024-07-25 04:16:24.092180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.976 [2024-07-25 04:16:24.092199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.976 [2024-07-25 04:16:24.092446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.976 [2024-07-25 04:16:24.092691] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.976 [2024-07-25 04:16:24.092716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.977 [2024-07-25 04:16:24.092732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.977 [2024-07-25 04:16:24.096293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.977 [2024-07-25 04:16:24.105541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.977 [2024-07-25 04:16:24.105939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.977 [2024-07-25 04:16:24.105972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.977 [2024-07-25 04:16:24.105990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.977 [2024-07-25 04:16:24.106235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.977 [2024-07-25 04:16:24.106488] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.977 [2024-07-25 04:16:24.106514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.977 [2024-07-25 04:16:24.106530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.977 [2024-07-25 04:16:24.110083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.977 [2024-07-25 04:16:24.119548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.977 [2024-07-25 04:16:24.119977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.977 [2024-07-25 04:16:24.120009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.977 [2024-07-25 04:16:24.120028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.977 [2024-07-25 04:16:24.120275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.977 [2024-07-25 04:16:24.120529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.977 [2024-07-25 04:16:24.120554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.977 [2024-07-25 04:16:24.120570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.977 [2024-07-25 04:16:24.124123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.977 [2024-07-25 04:16:24.133420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.977 [2024-07-25 04:16:24.133856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.977 [2024-07-25 04:16:24.133888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.977 [2024-07-25 04:16:24.133907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.977 [2024-07-25 04:16:24.134146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.977 [2024-07-25 04:16:24.134401] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.977 [2024-07-25 04:16:24.134426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.977 [2024-07-25 04:16:24.134443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.977 [2024-07-25 04:16:24.138009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.977 [2024-07-25 04:16:24.147287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.977 [2024-07-25 04:16:24.147726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.977 [2024-07-25 04:16:24.147758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.977 [2024-07-25 04:16:24.147777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.977 [2024-07-25 04:16:24.148015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.977 [2024-07-25 04:16:24.148271] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.977 [2024-07-25 04:16:24.148297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.977 [2024-07-25 04:16:24.148319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.977 [2024-07-25 04:16:24.151875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.977 [2024-07-25 04:16:24.161137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.977 [2024-07-25 04:16:24.161576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.977 [2024-07-25 04:16:24.161609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.977 [2024-07-25 04:16:24.161628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.977 [2024-07-25 04:16:24.161866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.977 [2024-07-25 04:16:24.162111] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.977 [2024-07-25 04:16:24.162136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.977 [2024-07-25 04:16:24.162153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.977 [2024-07-25 04:16:24.165729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.977 [2024-07-25 04:16:24.174991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.977 [2024-07-25 04:16:24.175449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.977 [2024-07-25 04:16:24.175478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.977 [2024-07-25 04:16:24.175495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.977 [2024-07-25 04:16:24.175751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.977 [2024-07-25 04:16:24.175993] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.977 [2024-07-25 04:16:24.176019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.977 [2024-07-25 04:16:24.176036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.977 [2024-07-25 04:16:24.179606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.977 [2024-07-25 04:16:24.188859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.977 [2024-07-25 04:16:24.189288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.977 [2024-07-25 04:16:24.189320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.977 [2024-07-25 04:16:24.189339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.977 [2024-07-25 04:16:24.189578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.977 [2024-07-25 04:16:24.189821] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.977 [2024-07-25 04:16:24.189846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.977 [2024-07-25 04:16:24.189862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.977 [2024-07-25 04:16:24.193429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.977 [2024-07-25 04:16:24.202690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.977 [2024-07-25 04:16:24.203126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.977 [2024-07-25 04:16:24.203163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.977 [2024-07-25 04:16:24.203182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.977 [2024-07-25 04:16:24.203435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.977 [2024-07-25 04:16:24.203678] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.977 [2024-07-25 04:16:24.203703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.977 [2024-07-25 04:16:24.203720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.977 [2024-07-25 04:16:24.207286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.977 [2024-07-25 04:16:24.216535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.977 [2024-07-25 04:16:24.216958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.977 [2024-07-25 04:16:24.216991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.977 [2024-07-25 04:16:24.217009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.977 [2024-07-25 04:16:24.217260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.977 [2024-07-25 04:16:24.217503] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.977 [2024-07-25 04:16:24.217530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.977 [2024-07-25 04:16:24.217547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.977 [2024-07-25 04:16:24.221158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.977 [2024-07-25 04:16:24.230422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.977 [2024-07-25 04:16:24.230849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.977 [2024-07-25 04:16:24.230881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.977 [2024-07-25 04:16:24.230899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.977 [2024-07-25 04:16:24.231138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.977 [2024-07-25 04:16:24.231394] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.978 [2024-07-25 04:16:24.231421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.978 [2024-07-25 04:16:24.231438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.978 [2024-07-25 04:16:24.234995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.978 [2024-07-25 04:16:24.244262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.978 [2024-07-25 04:16:24.244691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.978 [2024-07-25 04:16:24.244724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.978 [2024-07-25 04:16:24.244744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.978 [2024-07-25 04:16:24.244983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.978 [2024-07-25 04:16:24.245231] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.978 [2024-07-25 04:16:24.245270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.978 [2024-07-25 04:16:24.245288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.978 [2024-07-25 04:16:24.248844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.978 [2024-07-25 04:16:24.258093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.978 [2024-07-25 04:16:24.258504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.978 [2024-07-25 04:16:24.258537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.978 [2024-07-25 04:16:24.258557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:08.978 [2024-07-25 04:16:24.258796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:08.978 [2024-07-25 04:16:24.259040] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.978 [2024-07-25 04:16:24.259066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.978 [2024-07-25 04:16:24.259082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.978 [2024-07-25 04:16:24.262653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.978 [2024-07-25 04:16:24.272116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.978 [2024-07-25 04:16:24.272548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.978 [2024-07-25 04:16:24.272578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:08.978 [2024-07-25 04:16:24.272596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.237 [2024-07-25 04:16:24.272841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.237 [2024-07-25 04:16:24.273084] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.237 [2024-07-25 04:16:24.273109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.237 [2024-07-25 04:16:24.273126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.237 [2024-07-25 04:16:24.276714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.237 [2024-07-25 04:16:24.285968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.237 [2024-07-25 04:16:24.286402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.237 [2024-07-25 04:16:24.286435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.237 [2024-07-25 04:16:24.286454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.237 [2024-07-25 04:16:24.286693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.237 [2024-07-25 04:16:24.286935] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.237 [2024-07-25 04:16:24.286962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.237 [2024-07-25 04:16:24.286978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.237 [2024-07-25 04:16:24.290553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.237 [2024-07-25 04:16:24.299808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.237 [2024-07-25 04:16:24.300303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.237 [2024-07-25 04:16:24.300332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.237 [2024-07-25 04:16:24.300348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.237 [2024-07-25 04:16:24.300593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.237 [2024-07-25 04:16:24.300846] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.237 [2024-07-25 04:16:24.300872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.237 [2024-07-25 04:16:24.300888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.237 [2024-07-25 04:16:24.304462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.237 [2024-07-25 04:16:24.313710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.237 [2024-07-25 04:16:24.314217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.237 [2024-07-25 04:16:24.314281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.237 [2024-07-25 04:16:24.314300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.237 [2024-07-25 04:16:24.314539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.237 [2024-07-25 04:16:24.314782] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.237 [2024-07-25 04:16:24.314807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.237 [2024-07-25 04:16:24.314824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.237 [2024-07-25 04:16:24.318404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.237 [2024-07-25 04:16:24.327651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.237 [2024-07-25 04:16:24.328079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.237 [2024-07-25 04:16:24.328112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.237 [2024-07-25 04:16:24.328131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.237 [2024-07-25 04:16:24.328384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.237 [2024-07-25 04:16:24.328627] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.237 [2024-07-25 04:16:24.328653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.237 [2024-07-25 04:16:24.328669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.237 [2024-07-25 04:16:24.332225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.237 [2024-07-25 04:16:24.341481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.237 [2024-07-25 04:16:24.341913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.237 [2024-07-25 04:16:24.341946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.237 [2024-07-25 04:16:24.341980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.237 [2024-07-25 04:16:24.342220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.237 [2024-07-25 04:16:24.342478] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.237 [2024-07-25 04:16:24.342504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.237 [2024-07-25 04:16:24.342520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.237 [2024-07-25 04:16:24.346079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.237 [2024-07-25 04:16:24.355386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.237 [2024-07-25 04:16:24.355830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.237 [2024-07-25 04:16:24.355858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.237 [2024-07-25 04:16:24.355874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.237 [2024-07-25 04:16:24.356120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.237 [2024-07-25 04:16:24.356377] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.237 [2024-07-25 04:16:24.356403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.237 [2024-07-25 04:16:24.356420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.237 [2024-07-25 04:16:24.359981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.237 [2024-07-25 04:16:24.369263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.237 [2024-07-25 04:16:24.369673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.237 [2024-07-25 04:16:24.369704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.237 [2024-07-25 04:16:24.369722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.237 [2024-07-25 04:16:24.369960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.237 [2024-07-25 04:16:24.370203] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.237 [2024-07-25 04:16:24.370229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.237 [2024-07-25 04:16:24.370257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.237 [2024-07-25 04:16:24.373834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.237 [2024-07-25 04:16:24.383285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.237 [2024-07-25 04:16:24.383735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.237 [2024-07-25 04:16:24.383770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.237 [2024-07-25 04:16:24.383789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.237 [2024-07-25 04:16:24.384029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.237 [2024-07-25 04:16:24.384285] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.237 [2024-07-25 04:16:24.384317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.237 [2024-07-25 04:16:24.384334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.237 [2024-07-25 04:16:24.387988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.237 [2024-07-25 04:16:24.397295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.237 [2024-07-25 04:16:24.397843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.237 [2024-07-25 04:16:24.397877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.237 [2024-07-25 04:16:24.397896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.237 [2024-07-25 04:16:24.398136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.237 [2024-07-25 04:16:24.398392] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.237 [2024-07-25 04:16:24.398418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.237 [2024-07-25 04:16:24.398435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.237 [2024-07-25 04:16:24.402001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.238 [2024-07-25 04:16:24.411295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.238 [2024-07-25 04:16:24.411722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.238 [2024-07-25 04:16:24.411749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.238 [2024-07-25 04:16:24.411765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.238 [2024-07-25 04:16:24.412002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.238 [2024-07-25 04:16:24.412258] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.238 [2024-07-25 04:16:24.412297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.238 [2024-07-25 04:16:24.412311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.238 [2024-07-25 04:16:24.415846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.238 [2024-07-25 04:16:24.425330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.238 [2024-07-25 04:16:24.425751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.238 [2024-07-25 04:16:24.425784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.238 [2024-07-25 04:16:24.425802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.238 [2024-07-25 04:16:24.426040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.238 [2024-07-25 04:16:24.426298] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.238 [2024-07-25 04:16:24.426324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.238 [2024-07-25 04:16:24.426341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.238 [2024-07-25 04:16:24.429901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.238 [2024-07-25 04:16:24.439181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.238 [2024-07-25 04:16:24.439594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.238 [2024-07-25 04:16:24.439627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.238 [2024-07-25 04:16:24.439646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.238 [2024-07-25 04:16:24.439885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.238 [2024-07-25 04:16:24.440129] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.238 [2024-07-25 04:16:24.440153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.238 [2024-07-25 04:16:24.440169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.238 [2024-07-25 04:16:24.443738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.238 [2024-07-25 04:16:24.453210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.238 [2024-07-25 04:16:24.453636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.238 [2024-07-25 04:16:24.453670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.238 [2024-07-25 04:16:24.453689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.238 [2024-07-25 04:16:24.453928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.238 [2024-07-25 04:16:24.454172] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.238 [2024-07-25 04:16:24.454198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.238 [2024-07-25 04:16:24.454214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.238 [2024-07-25 04:16:24.457789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.238 [2024-07-25 04:16:24.467059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.238 [2024-07-25 04:16:24.467502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.238 [2024-07-25 04:16:24.467535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.238 [2024-07-25 04:16:24.467554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.238 [2024-07-25 04:16:24.467793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.238 [2024-07-25 04:16:24.468036] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.238 [2024-07-25 04:16:24.468061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.238 [2024-07-25 04:16:24.468077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.238 [2024-07-25 04:16:24.471481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.238 [2024-07-25 04:16:24.480893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.238 [2024-07-25 04:16:24.481338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.238 [2024-07-25 04:16:24.481368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.238 [2024-07-25 04:16:24.481385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.238 [2024-07-25 04:16:24.481638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.238 [2024-07-25 04:16:24.481903] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.238 [2024-07-25 04:16:24.481928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.238 [2024-07-25 04:16:24.481944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.238 [2024-07-25 04:16:24.485560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.238 [2024-07-25 04:16:24.494925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.238 [2024-07-25 04:16:24.495370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.238 [2024-07-25 04:16:24.495400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.238 [2024-07-25 04:16:24.495417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.238 [2024-07-25 04:16:24.495660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.238 [2024-07-25 04:16:24.495904] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.238 [2024-07-25 04:16:24.495929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.238 [2024-07-25 04:16:24.495946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.238 [2024-07-25 04:16:24.499541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.238 [2024-07-25 04:16:24.508847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.238 [2024-07-25 04:16:24.509291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.238 [2024-07-25 04:16:24.509331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.238 [2024-07-25 04:16:24.509347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.238 [2024-07-25 04:16:24.509594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.238 [2024-07-25 04:16:24.509840] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.238 [2024-07-25 04:16:24.509866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.238 [2024-07-25 04:16:24.509882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.238 [2024-07-25 04:16:24.513409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.238 [2024-07-25 04:16:24.522816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.238 [2024-07-25 04:16:24.523253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.238 [2024-07-25 04:16:24.523286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.238 [2024-07-25 04:16:24.523305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.238 [2024-07-25 04:16:24.523544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.238 [2024-07-25 04:16:24.523786] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.238 [2024-07-25 04:16:24.523811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.238 [2024-07-25 04:16:24.523832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.238 [2024-07-25 04:16:24.527397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.497 [2024-07-25 04:16:24.536647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.497 [2024-07-25 04:16:24.537072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.497 [2024-07-25 04:16:24.537105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.497 [2024-07-25 04:16:24.537123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.497 [2024-07-25 04:16:24.537374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.497 [2024-07-25 04:16:24.537616] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.497 [2024-07-25 04:16:24.537642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.497 [2024-07-25 04:16:24.537659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.497 [2024-07-25 04:16:24.541196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.497 [2024-07-25 04:16:24.550232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.497 [2024-07-25 04:16:24.550608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.497 [2024-07-25 04:16:24.550637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.497 [2024-07-25 04:16:24.550654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.497 [2024-07-25 04:16:24.550869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.497 [2024-07-25 04:16:24.551088] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.497 [2024-07-25 04:16:24.551111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.497 [2024-07-25 04:16:24.551125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.497 [2024-07-25 04:16:24.554249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.497 [2024-07-25 04:16:24.563395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.497 [2024-07-25 04:16:24.563804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.497 [2024-07-25 04:16:24.563834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.497 [2024-07-25 04:16:24.563851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.497 [2024-07-25 04:16:24.564104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.497 [2024-07-25 04:16:24.564326] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.497 [2024-07-25 04:16:24.564348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.497 [2024-07-25 04:16:24.564362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.497 [2024-07-25 04:16:24.567303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.497 [2024-07-25 04:16:24.576689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.497 [2024-07-25 04:16:24.577111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.497 [2024-07-25 04:16:24.577139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.497 [2024-07-25 04:16:24.577156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.497 [2024-07-25 04:16:24.577420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.497 [2024-07-25 04:16:24.577633] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.498 [2024-07-25 04:16:24.577654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.498 [2024-07-25 04:16:24.577668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.498 [2024-07-25 04:16:24.580648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.498 [2024-07-25 04:16:24.589847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.498 [2024-07-25 04:16:24.590253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.498 [2024-07-25 04:16:24.590283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.498 [2024-07-25 04:16:24.590300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.498 [2024-07-25 04:16:24.590541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.498 [2024-07-25 04:16:24.590766] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.498 [2024-07-25 04:16:24.590788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.498 [2024-07-25 04:16:24.590801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.498 [2024-07-25 04:16:24.593744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.498 [2024-07-25 04:16:24.603185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.498 [2024-07-25 04:16:24.603602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.498 [2024-07-25 04:16:24.603633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.498 [2024-07-25 04:16:24.603650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.498 [2024-07-25 04:16:24.603902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.498 [2024-07-25 04:16:24.604098] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.498 [2024-07-25 04:16:24.604119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.498 [2024-07-25 04:16:24.604132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.498 [2024-07-25 04:16:24.607149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.498 [2024-07-25 04:16:24.616369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.498 [2024-07-25 04:16:24.616839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.498 [2024-07-25 04:16:24.616869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.498 [2024-07-25 04:16:24.616886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.498 [2024-07-25 04:16:24.617142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.498 [2024-07-25 04:16:24.617366] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.498 [2024-07-25 04:16:24.617388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.498 [2024-07-25 04:16:24.617403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.498 [2024-07-25 04:16:24.620342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.498 [2024-07-25 04:16:24.629871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.498 [2024-07-25 04:16:24.630298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.498 [2024-07-25 04:16:24.630329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.498 [2024-07-25 04:16:24.630346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.498 [2024-07-25 04:16:24.630576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.498 [2024-07-25 04:16:24.630798] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.498 [2024-07-25 04:16:24.630820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.498 [2024-07-25 04:16:24.630834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.498 [2024-07-25 04:16:24.634008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.498 [2024-07-25 04:16:24.643045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.498 [2024-07-25 04:16:24.643417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.498 [2024-07-25 04:16:24.643447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.498 [2024-07-25 04:16:24.643464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.498 [2024-07-25 04:16:24.643702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.498 [2024-07-25 04:16:24.643897] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.498 [2024-07-25 04:16:24.643918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.498 [2024-07-25 04:16:24.643932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.498 [2024-07-25 04:16:24.646919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.498 [2024-07-25 04:16:24.656323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.498 [2024-07-25 04:16:24.656730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.498 [2024-07-25 04:16:24.656760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.498 [2024-07-25 04:16:24.656777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.498 [2024-07-25 04:16:24.657031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.498 [2024-07-25 04:16:24.657270] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.498 [2024-07-25 04:16:24.657292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.498 [2024-07-25 04:16:24.657310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.498 [2024-07-25 04:16:24.660238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.498 [2024-07-25 04:16:24.669626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.498 [2024-07-25 04:16:24.670015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.498 [2024-07-25 04:16:24.670045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.498 [2024-07-25 04:16:24.670062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.498 [2024-07-25 04:16:24.670327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.498 [2024-07-25 04:16:24.670541] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.498 [2024-07-25 04:16:24.670562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.498 [2024-07-25 04:16:24.670576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.498 [2024-07-25 04:16:24.673502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.498 [2024-07-25 04:16:24.682894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.498 [2024-07-25 04:16:24.683313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.498 [2024-07-25 04:16:24.683342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.498 [2024-07-25 04:16:24.683358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.498 [2024-07-25 04:16:24.683595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.498 [2024-07-25 04:16:24.683804] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.498 [2024-07-25 04:16:24.683825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.498 [2024-07-25 04:16:24.683838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.498 [2024-07-25 04:16:24.686787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.498 [2024-07-25 04:16:24.696142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.498 [2024-07-25 04:16:24.696577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.498 [2024-07-25 04:16:24.696606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.498 [2024-07-25 04:16:24.696622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.498 [2024-07-25 04:16:24.696868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.498 [2024-07-25 04:16:24.697062] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.498 [2024-07-25 04:16:24.697083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.498 [2024-07-25 04:16:24.697096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.498 [2024-07-25 04:16:24.700052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.498 [2024-07-25 04:16:24.709459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.498 [2024-07-25 04:16:24.709875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.498 [2024-07-25 04:16:24.709909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.498 [2024-07-25 04:16:24.709926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.499 [2024-07-25 04:16:24.710175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.499 [2024-07-25 04:16:24.710397] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.499 [2024-07-25 04:16:24.710419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.499 [2024-07-25 04:16:24.710433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.499 [2024-07-25 04:16:24.713381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.499 [2024-07-25 04:16:24.722772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.499 [2024-07-25 04:16:24.723163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.499 [2024-07-25 04:16:24.723192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.499 [2024-07-25 04:16:24.723208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.499 [2024-07-25 04:16:24.723447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.499 [2024-07-25 04:16:24.723673] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.499 [2024-07-25 04:16:24.723695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.499 [2024-07-25 04:16:24.723708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.499 [2024-07-25 04:16:24.726690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.499 [2024-07-25 04:16:24.736057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.499 [2024-07-25 04:16:24.736465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.499 [2024-07-25 04:16:24.736496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.499 [2024-07-25 04:16:24.736513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.499 [2024-07-25 04:16:24.736763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.499 [2024-07-25 04:16:24.736957] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.499 [2024-07-25 04:16:24.736978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.499 [2024-07-25 04:16:24.736991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.499 [2024-07-25 04:16:24.739972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.499 [2024-07-25 04:16:24.749348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.499 [2024-07-25 04:16:24.749755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.499 [2024-07-25 04:16:24.749783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.499 [2024-07-25 04:16:24.749799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.499 [2024-07-25 04:16:24.750033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.499 [2024-07-25 04:16:24.750273] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.499 [2024-07-25 04:16:24.750295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.499 [2024-07-25 04:16:24.750310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.499 [2024-07-25 04:16:24.753232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.499 [2024-07-25 04:16:24.762530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.499 [2024-07-25 04:16:24.762935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.499 [2024-07-25 04:16:24.762964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.499 [2024-07-25 04:16:24.762982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.499 [2024-07-25 04:16:24.763235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.499 [2024-07-25 04:16:24.763460] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.499 [2024-07-25 04:16:24.763482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.499 [2024-07-25 04:16:24.763496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.499 [2024-07-25 04:16:24.766499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.499 [2024-07-25 04:16:24.775804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.499 [2024-07-25 04:16:24.776175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.499 [2024-07-25 04:16:24.776203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.499 [2024-07-25 04:16:24.776219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.499 [2024-07-25 04:16:24.776452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.499 [2024-07-25 04:16:24.776684] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.499 [2024-07-25 04:16:24.776705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.499 [2024-07-25 04:16:24.776719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.499 [2024-07-25 04:16:24.779661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.499 [2024-07-25 04:16:24.789019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.499 [2024-07-25 04:16:24.789390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.499 [2024-07-25 04:16:24.789419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.499 [2024-07-25 04:16:24.789436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.499 [2024-07-25 04:16:24.789668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.499 [2024-07-25 04:16:24.789901] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.499 [2024-07-25 04:16:24.789923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.499 [2024-07-25 04:16:24.789937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.499 [2024-07-25 04:16:24.793258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.758 [2024-07-25 04:16:24.802350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.758 [2024-07-25 04:16:24.802753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.758 [2024-07-25 04:16:24.802782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.758 [2024-07-25 04:16:24.802798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.758 [2024-07-25 04:16:24.803033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.758 [2024-07-25 04:16:24.803270] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.758 [2024-07-25 04:16:24.803292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.758 [2024-07-25 04:16:24.803307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.758 [2024-07-25 04:16:24.806269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.758 [2024-07-25 04:16:24.815482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.758 [2024-07-25 04:16:24.815930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.758 [2024-07-25 04:16:24.815959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.758 [2024-07-25 04:16:24.815976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.758 [2024-07-25 04:16:24.816229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.758 [2024-07-25 04:16:24.816453] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.758 [2024-07-25 04:16:24.816474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.758 [2024-07-25 04:16:24.816488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.758 [2024-07-25 04:16:24.819427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.758 [2024-07-25 04:16:24.828711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.758 [2024-07-25 04:16:24.829093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.758 [2024-07-25 04:16:24.829121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.759 [2024-07-25 04:16:24.829136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.759 [2024-07-25 04:16:24.829361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.759 [2024-07-25 04:16:24.829576] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.759 [2024-07-25 04:16:24.829597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.759 [2024-07-25 04:16:24.829611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.759 [2024-07-25 04:16:24.832537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.759 [2024-07-25 04:16:24.841938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.759 [2024-07-25 04:16:24.842326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-25 04:16:24.842356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.759 [2024-07-25 04:16:24.842378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.759 [2024-07-25 04:16:24.842630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.759 [2024-07-25 04:16:24.842824] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.759 [2024-07-25 04:16:24.842845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.759 [2024-07-25 04:16:24.842858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.759 [2024-07-25 04:16:24.845804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.759 [2024-07-25 04:16:24.855164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.759 [2024-07-25 04:16:24.855577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-25 04:16:24.855607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.759 [2024-07-25 04:16:24.855624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.759 [2024-07-25 04:16:24.855875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.759 [2024-07-25 04:16:24.856069] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.759 [2024-07-25 04:16:24.856090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.759 [2024-07-25 04:16:24.856103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.759 [2024-07-25 04:16:24.859087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.759 [2024-07-25 04:16:24.868441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.759 [2024-07-25 04:16:24.868908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-25 04:16:24.868937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.759 [2024-07-25 04:16:24.868953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.759 [2024-07-25 04:16:24.869186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.759 [2024-07-25 04:16:24.869409] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.759 [2024-07-25 04:16:24.869431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.759 [2024-07-25 04:16:24.869445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.759 [2024-07-25 04:16:24.872383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.759 [2024-07-25 04:16:24.881767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.759 [2024-07-25 04:16:24.882216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-25 04:16:24.882252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.759 [2024-07-25 04:16:24.882270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.759 [2024-07-25 04:16:24.882527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.759 [2024-07-25 04:16:24.882736] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.759 [2024-07-25 04:16:24.882762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.759 [2024-07-25 04:16:24.882778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.759 [2024-07-25 04:16:24.885939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.759 [2024-07-25 04:16:24.895079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.759 [2024-07-25 04:16:24.895475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-25 04:16:24.895504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.759 [2024-07-25 04:16:24.895520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.759 [2024-07-25 04:16:24.895774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.759 [2024-07-25 04:16:24.895968] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.759 [2024-07-25 04:16:24.895989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.759 [2024-07-25 04:16:24.896002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.759 [2024-07-25 04:16:24.898983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.759 [2024-07-25 04:16:24.908419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.759 [2024-07-25 04:16:24.908866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-25 04:16:24.908896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.759 [2024-07-25 04:16:24.908913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.759 [2024-07-25 04:16:24.909149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.759 [2024-07-25 04:16:24.909374] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.759 [2024-07-25 04:16:24.909396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.759 [2024-07-25 04:16:24.909410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.759 [2024-07-25 04:16:24.912350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.759 [2024-07-25 04:16:24.921726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.759 [2024-07-25 04:16:24.922108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-25 04:16:24.922137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.759 [2024-07-25 04:16:24.922153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.759 [2024-07-25 04:16:24.922438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.759 [2024-07-25 04:16:24.922651] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.759 [2024-07-25 04:16:24.922672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.759 [2024-07-25 04:16:24.922685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.759 [2024-07-25 04:16:24.925668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.759 [2024-07-25 04:16:24.934907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.759 [2024-07-25 04:16:24.935352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-25 04:16:24.935383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.759 [2024-07-25 04:16:24.935400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.759 [2024-07-25 04:16:24.935654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.759 [2024-07-25 04:16:24.935848] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.759 [2024-07-25 04:16:24.935869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.759 [2024-07-25 04:16:24.935883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.759 [2024-07-25 04:16:24.938906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.759 [2024-07-25 04:16:24.948092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.759 [2024-07-25 04:16:24.948482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-25 04:16:24.948512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.759 [2024-07-25 04:16:24.948529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.759 [2024-07-25 04:16:24.948777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.759 [2024-07-25 04:16:24.948970] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.759 [2024-07-25 04:16:24.948991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.759 [2024-07-25 04:16:24.949005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.759 [2024-07-25 04:16:24.951949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.760 [2024-07-25 04:16:24.961374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.760 [2024-07-25 04:16:24.961741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-25 04:16:24.961768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:09.760 [2024-07-25 04:16:24.961784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:09.760 [2024-07-25 04:16:24.961985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:09.760 [2024-07-25 04:16:24.962211] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.760 [2024-07-25 04:16:24.962232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.760 [2024-07-25 04:16:24.962254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.760 [2024-07-25 04:16:24.965183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.760 [2024-07-25 04:16:24.974585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.760 [2024-07-25 04:16:24.975000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.760 [2024-07-25 04:16:24.975029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.760 [2024-07-25 04:16:24.975045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.760 [2024-07-25 04:16:24.975296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.760 [2024-07-25 04:16:24.975496] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.760 [2024-07-25 04:16:24.975517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.760 [2024-07-25 04:16:24.975531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.760 [2024-07-25 04:16:24.978460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.760 [2024-07-25 04:16:24.987810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.760 [2024-07-25 04:16:24.988216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.760 [2024-07-25 04:16:24.988265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.760 [2024-07-25 04:16:24.988282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.760 [2024-07-25 04:16:24.988519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.760 [2024-07-25 04:16:24.988729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.760 [2024-07-25 04:16:24.988750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.760 [2024-07-25 04:16:24.988764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.760 [2024-07-25 04:16:24.991733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.760 [2024-07-25 04:16:25.000994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.760 [2024-07-25 04:16:25.001377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.760 [2024-07-25 04:16:25.001408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.760 [2024-07-25 04:16:25.001425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.760 [2024-07-25 04:16:25.001679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.760 [2024-07-25 04:16:25.001874] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.760 [2024-07-25 04:16:25.001895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.760 [2024-07-25 04:16:25.001908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.760 [2024-07-25 04:16:25.004844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.760 [2024-07-25 04:16:25.014207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.760 [2024-07-25 04:16:25.014578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.760 [2024-07-25 04:16:25.014607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.760 [2024-07-25 04:16:25.014624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.760 [2024-07-25 04:16:25.014848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.760 [2024-07-25 04:16:25.015058] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.760 [2024-07-25 04:16:25.015079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.760 [2024-07-25 04:16:25.015097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.760 [2024-07-25 04:16:25.018046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.760 [2024-07-25 04:16:25.027512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.760 [2024-07-25 04:16:25.027977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.760 [2024-07-25 04:16:25.028006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.760 [2024-07-25 04:16:25.028023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.760 [2024-07-25 04:16:25.028287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.760 [2024-07-25 04:16:25.028493] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.760 [2024-07-25 04:16:25.028515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.760 [2024-07-25 04:16:25.028529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.760 [2024-07-25 04:16:25.031428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.760 [2024-07-25 04:16:25.040803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.760 [2024-07-25 04:16:25.041159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.760 [2024-07-25 04:16:25.041188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.760 [2024-07-25 04:16:25.041205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:09.760 [2024-07-25 04:16:25.041470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:09.760 [2024-07-25 04:16:25.041682] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:09.760 [2024-07-25 04:16:25.041703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:09.760 [2024-07-25 04:16:25.041716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:09.760 [2024-07-25 04:16:25.044658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:09.760 [2024-07-25 04:16:25.054452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:09.760 [2024-07-25 04:16:25.054906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.760 [2024-07-25 04:16:25.054936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:09.760 [2024-07-25 04:16:25.054953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.017 [2024-07-25 04:16:25.055181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.017 [2024-07-25 04:16:25.055426] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.017 [2024-07-25 04:16:25.055450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.017 [2024-07-25 04:16:25.055465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.017 [2024-07-25 04:16:25.058506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.017 [2024-07-25 04:16:25.067693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.017 [2024-07-25 04:16:25.068148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.017 [2024-07-25 04:16:25.068177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.017 [2024-07-25 04:16:25.068194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.017 [2024-07-25 04:16:25.068440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.017 [2024-07-25 04:16:25.068652] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.017 [2024-07-25 04:16:25.068673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.017 [2024-07-25 04:16:25.068687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.017 [2024-07-25 04:16:25.071629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.017 [2024-07-25 04:16:25.080954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.017 [2024-07-25 04:16:25.081339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.017 [2024-07-25 04:16:25.081368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.017 [2024-07-25 04:16:25.081385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.017 [2024-07-25 04:16:25.081633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.017 [2024-07-25 04:16:25.081827] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.081848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.081861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.084803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.094153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.094583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.094612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.094629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.094877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.095070] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.095091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.095104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.098047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.107442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.107821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.107850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.107866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.108101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.108327] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.108358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.108372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.111296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.120688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.121138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.121168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.121185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.121450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.121664] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.121685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.121699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.124678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.133903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.134355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.134384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.134401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.134653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.134847] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.134868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.134881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.138082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.147160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.147582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.147611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.147628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.147882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.148076] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.148096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.148109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.151062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.160488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.160949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.160977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.160994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.161254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.161468] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.161489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.161503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.164442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.173866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.174221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.174272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.174290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.174543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.174738] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.174758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.174772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.177726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.187088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.187530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.187575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.187592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.187826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.188020] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.188040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.188054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.190997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.200282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.200643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.200676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.200693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.200929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.201122] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.201143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.201156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.204111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.213579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.213980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.214009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.214026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.214268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.214471] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.214491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.214505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.217445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.226874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.227346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.227376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.227393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.227645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.227839] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.227860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.227873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.230815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.240179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.240566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.240596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.240613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.240855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.241070] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.241091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.241105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.244046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.253471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.253939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.253968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.253984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.254224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.254432] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.254453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.254467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.257408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.266676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.267061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.267090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.267107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.267356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.267557] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.267603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.267617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.270564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.279933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.280286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.280330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.280347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.280597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.280806] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.280826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.280839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.283780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.293158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.293528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.018 [2024-07-25 04:16:25.293557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.018 [2024-07-25 04:16:25.293573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.018 [2024-07-25 04:16:25.293794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.018 [2024-07-25 04:16:25.294003] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.018 [2024-07-25 04:16:25.294024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.018 [2024-07-25 04:16:25.294037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.018 [2024-07-25 04:16:25.296981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.018 [2024-07-25 04:16:25.306376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.018 [2024-07-25 04:16:25.306850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.019 [2024-07-25 04:16:25.306879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.019 [2024-07-25 04:16:25.306895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.019 [2024-07-25 04:16:25.307147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.019 [2024-07-25 04:16:25.307370] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.019 [2024-07-25 04:16:25.307391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.019 [2024-07-25 04:16:25.307405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.019 [2024-07-25 04:16:25.310416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.277 [2024-07-25 04:16:25.319828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.277 [2024-07-25 04:16:25.320215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.277 [2024-07-25 04:16:25.320252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.277 [2024-07-25 04:16:25.320271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.277 [2024-07-25 04:16:25.320510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.277 [2024-07-25 04:16:25.320720] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.277 [2024-07-25 04:16:25.320741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.277 [2024-07-25 04:16:25.320754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.277 [2024-07-25 04:16:25.323743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.277 [2024-07-25 04:16:25.333121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.277 [2024-07-25 04:16:25.333569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.277 [2024-07-25 04:16:25.333598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.277 [2024-07-25 04:16:25.333619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.277 [2024-07-25 04:16:25.333849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.277 [2024-07-25 04:16:25.334043] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.277 [2024-07-25 04:16:25.334064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.277 [2024-07-25 04:16:25.334077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.277 [2024-07-25 04:16:25.337024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.277 [2024-07-25 04:16:25.346445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.277 [2024-07-25 04:16:25.346915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.277 [2024-07-25 04:16:25.346945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.277 [2024-07-25 04:16:25.346961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.277 [2024-07-25 04:16:25.347213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.277 [2024-07-25 04:16:25.347443] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.277 [2024-07-25 04:16:25.347466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.277 [2024-07-25 04:16:25.347480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.277 [2024-07-25 04:16:25.350442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.277 [2024-07-25 04:16:25.359831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.277 [2024-07-25 04:16:25.360283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.277 [2024-07-25 04:16:25.360313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.277 [2024-07-25 04:16:25.360329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.277 [2024-07-25 04:16:25.360571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.277 [2024-07-25 04:16:25.360779] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.277 [2024-07-25 04:16:25.360800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.277 [2024-07-25 04:16:25.360814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.277 [2024-07-25 04:16:25.363757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.277 [2024-07-25 04:16:25.373343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.277 [2024-07-25 04:16:25.373703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.277 [2024-07-25 04:16:25.373733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.277 [2024-07-25 04:16:25.373751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.277 [2024-07-25 04:16:25.373988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.277 [2024-07-25 04:16:25.374181] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.277 [2024-07-25 04:16:25.374205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.277 [2024-07-25 04:16:25.374219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.277 [2024-07-25 04:16:25.377217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.277 [2024-07-25 04:16:25.386604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.277 [2024-07-25 04:16:25.386995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.277 [2024-07-25 04:16:25.387024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.277 [2024-07-25 04:16:25.387040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.277 [2024-07-25 04:16:25.387284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.277 [2024-07-25 04:16:25.387483] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.277 [2024-07-25 04:16:25.387504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.278 [2024-07-25 04:16:25.387518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.278 [2024-07-25 04:16:25.390639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.278 [2024-07-25 04:16:25.399775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.278 [2024-07-25 04:16:25.400227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.278 [2024-07-25 04:16:25.400263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.278 [2024-07-25 04:16:25.400282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.278 [2024-07-25 04:16:25.400537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.278 [2024-07-25 04:16:25.400747] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.278 [2024-07-25 04:16:25.400768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.278 [2024-07-25 04:16:25.400782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.278 [2024-07-25 04:16:25.403691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.278 [2024-07-25 04:16:25.412974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.278 [2024-07-25 04:16:25.413420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.278 [2024-07-25 04:16:25.413450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.278 [2024-07-25 04:16:25.413467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.278 [2024-07-25 04:16:25.413717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.278 [2024-07-25 04:16:25.413911] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.278 [2024-07-25 04:16:25.413932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.278 [2024-07-25 04:16:25.413946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.278 [2024-07-25 04:16:25.416897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.278 [2024-07-25 04:16:25.426324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.278 [2024-07-25 04:16:25.426809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.278 [2024-07-25 04:16:25.426838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.278 [2024-07-25 04:16:25.426855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.278 [2024-07-25 04:16:25.427108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.278 [2024-07-25 04:16:25.427329] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.278 [2024-07-25 04:16:25.427351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.278 [2024-07-25 04:16:25.427366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.278 [2024-07-25 04:16:25.430307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.278 [2024-07-25 04:16:25.439503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.278 [2024-07-25 04:16:25.439907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.278 [2024-07-25 04:16:25.439936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.278 [2024-07-25 04:16:25.439953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.278 [2024-07-25 04:16:25.440205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.278 [2024-07-25 04:16:25.440428] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.278 [2024-07-25 04:16:25.440450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.278 [2024-07-25 04:16:25.440464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.278 [2024-07-25 04:16:25.443405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.278 [2024-07-25 04:16:25.452785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.278 [2024-07-25 04:16:25.453236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.278 [2024-07-25 04:16:25.453272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.278 [2024-07-25 04:16:25.453289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.278 [2024-07-25 04:16:25.453542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.278 [2024-07-25 04:16:25.453736] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.278 [2024-07-25 04:16:25.453757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.278 [2024-07-25 04:16:25.453770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.278 [2024-07-25 04:16:25.456714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.278 [2024-07-25 04:16:25.465946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.278 [2024-07-25 04:16:25.466334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.278 [2024-07-25 04:16:25.466363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.278 [2024-07-25 04:16:25.466380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.278 [2024-07-25 04:16:25.466638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.278 [2024-07-25 04:16:25.466832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.278 [2024-07-25 04:16:25.466853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.278 [2024-07-25 04:16:25.466866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.278 [2024-07-25 04:16:25.469810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.278 [2024-07-25 04:16:25.479135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.278 [2024-07-25 04:16:25.479523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.278 [2024-07-25 04:16:25.479568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.278 [2024-07-25 04:16:25.479585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.278 [2024-07-25 04:16:25.479820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.278 [2024-07-25 04:16:25.480014] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.278 [2024-07-25 04:16:25.480034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.278 [2024-07-25 04:16:25.480048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.278 [2024-07-25 04:16:25.482994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.278 [2024-07-25 04:16:25.492380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.278 [2024-07-25 04:16:25.492792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.278 [2024-07-25 04:16:25.492821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.278 [2024-07-25 04:16:25.492839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.278 [2024-07-25 04:16:25.493091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.278 [2024-07-25 04:16:25.493328] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.278 [2024-07-25 04:16:25.493351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.278 [2024-07-25 04:16:25.493365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.278 [2024-07-25 04:16:25.496327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.278 [2024-07-25 04:16:25.505576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.278 [2024-07-25 04:16:25.505959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.278 [2024-07-25 04:16:25.505988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.278 [2024-07-25 04:16:25.506004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.278 [2024-07-25 04:16:25.506240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.278 [2024-07-25 04:16:25.506447] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.278 [2024-07-25 04:16:25.506468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.278 [2024-07-25 04:16:25.506486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.278 [2024-07-25 04:16:25.509481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.278 [2024-07-25 04:16:25.518885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.278 [2024-07-25 04:16:25.519269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.278 [2024-07-25 04:16:25.519297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.278 [2024-07-25 04:16:25.519314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.278 [2024-07-25 04:16:25.519551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.279 [2024-07-25 04:16:25.519761] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.279 [2024-07-25 04:16:25.519781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.279 [2024-07-25 04:16:25.519794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.279 [2024-07-25 04:16:25.522759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.279 [2024-07-25 04:16:25.532070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.279 [2024-07-25 04:16:25.532489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.279 [2024-07-25 04:16:25.532519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.279 [2024-07-25 04:16:25.532536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.279 [2024-07-25 04:16:25.532787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.279 [2024-07-25 04:16:25.532981] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.279 [2024-07-25 04:16:25.533002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.279 [2024-07-25 04:16:25.533015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.279 [2024-07-25 04:16:25.535978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.279 [2024-07-25 04:16:25.545296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.279 [2024-07-25 04:16:25.545752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.279 [2024-07-25 04:16:25.545779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.279 [2024-07-25 04:16:25.545795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.279 [2024-07-25 04:16:25.546030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.279 [2024-07-25 04:16:25.546240] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.279 [2024-07-25 04:16:25.546283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.279 [2024-07-25 04:16:25.546297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.279 [2024-07-25 04:16:25.549237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.279 [2024-07-25 04:16:25.558486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.279 [2024-07-25 04:16:25.558934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.279 [2024-07-25 04:16:25.558967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.279 [2024-07-25 04:16:25.558983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.279 [2024-07-25 04:16:25.559219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.279 [2024-07-25 04:16:25.559446] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.279 [2024-07-25 04:16:25.559468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.279 [2024-07-25 04:16:25.559482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.279 [2024-07-25 04:16:25.562447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.279 [2024-07-25 04:16:25.572212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.279 [2024-07-25 04:16:25.572719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.279 [2024-07-25 04:16:25.572748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.279 [2024-07-25 04:16:25.572766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.279 [2024-07-25 04:16:25.573017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.279 [2024-07-25 04:16:25.573263] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.279 [2024-07-25 04:16:25.573295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.279 [2024-07-25 04:16:25.573309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.537 [2024-07-25 04:16:25.576459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.537 [2024-07-25 04:16:25.585481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.537 [2024-07-25 04:16:25.585884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.537 [2024-07-25 04:16:25.585913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.537 [2024-07-25 04:16:25.585929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.537 [2024-07-25 04:16:25.586183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.537 [2024-07-25 04:16:25.586408] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.537 [2024-07-25 04:16:25.586429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.537 [2024-07-25 04:16:25.586443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.537 [2024-07-25 04:16:25.589429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.537 [2024-07-25 04:16:25.598719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.537 [2024-07-25 04:16:25.599041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.537 [2024-07-25 04:16:25.599068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.537 [2024-07-25 04:16:25.599085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.538 [2024-07-25 04:16:25.599327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.538 [2024-07-25 04:16:25.599547] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.538 [2024-07-25 04:16:25.599568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.538 [2024-07-25 04:16:25.599581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.538 [2024-07-25 04:16:25.602518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.538 [2024-07-25 04:16:25.611987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.538 [2024-07-25 04:16:25.612434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.538 [2024-07-25 04:16:25.612464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.538 [2024-07-25 04:16:25.612481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.538 [2024-07-25 04:16:25.612719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.538 [2024-07-25 04:16:25.612927] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.538 [2024-07-25 04:16:25.612947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.538 [2024-07-25 04:16:25.612960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.538 [2024-07-25 04:16:25.615909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.538 [2024-07-25 04:16:25.625370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.538 [2024-07-25 04:16:25.625753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.538 [2024-07-25 04:16:25.625788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.538 [2024-07-25 04:16:25.625813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.538 [2024-07-25 04:16:25.626072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.538 [2024-07-25 04:16:25.626351] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.538 [2024-07-25 04:16:25.626380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.538 [2024-07-25 04:16:25.626403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.538 [2024-07-25 04:16:25.629497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.538 [2024-07-25 04:16:25.638565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.538 [2024-07-25 04:16:25.639013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.538 [2024-07-25 04:16:25.639055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.538 [2024-07-25 04:16:25.639072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.538 [2024-07-25 04:16:25.639324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.538 [2024-07-25 04:16:25.639544] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.538 [2024-07-25 04:16:25.639565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.538 [2024-07-25 04:16:25.639579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.538 [2024-07-25 04:16:25.642823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.538 [2024-07-25 04:16:25.651824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.538 [2024-07-25 04:16:25.652240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.538 [2024-07-25 04:16:25.652302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.538 [2024-07-25 04:16:25.652319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.538 [2024-07-25 04:16:25.652560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.538 [2024-07-25 04:16:25.652768] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.538 [2024-07-25 04:16:25.652789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.538 [2024-07-25 04:16:25.652802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.538 [2024-07-25 04:16:25.655748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.538 [2024-07-25 04:16:25.664995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.538 [2024-07-25 04:16:25.665402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.538 [2024-07-25 04:16:25.665432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.538 [2024-07-25 04:16:25.665448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.538 [2024-07-25 04:16:25.665700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.538 [2024-07-25 04:16:25.665893] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.538 [2024-07-25 04:16:25.665914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.538 [2024-07-25 04:16:25.665927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.538 [2024-07-25 04:16:25.668905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.538 [2024-07-25 04:16:25.678989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.538 [2024-07-25 04:16:25.679418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.538 [2024-07-25 04:16:25.679451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.538 [2024-07-25 04:16:25.679470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.538 [2024-07-25 04:16:25.679708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.538 [2024-07-25 04:16:25.679950] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.538 [2024-07-25 04:16:25.679975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.538 [2024-07-25 04:16:25.679992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.538 [2024-07-25 04:16:25.683561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.538 [2024-07-25 04:16:25.692822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.538 [2024-07-25 04:16:25.693258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.538 [2024-07-25 04:16:25.693292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.538 [2024-07-25 04:16:25.693317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.538 [2024-07-25 04:16:25.693557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.538 [2024-07-25 04:16:25.693802] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.538 [2024-07-25 04:16:25.693827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.538 [2024-07-25 04:16:25.693843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.538 [2024-07-25 04:16:25.697408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.538 [2024-07-25 04:16:25.706674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.538 [2024-07-25 04:16:25.707105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.538 [2024-07-25 04:16:25.707138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.538 [2024-07-25 04:16:25.707157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.538 [2024-07-25 04:16:25.707410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.538 [2024-07-25 04:16:25.707654] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.538 [2024-07-25 04:16:25.707680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.538 [2024-07-25 04:16:25.707697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.538 [2024-07-25 04:16:25.711260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.538 [2024-07-25 04:16:25.720523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.538 [2024-07-25 04:16:25.720970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.538 [2024-07-25 04:16:25.721002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.538 [2024-07-25 04:16:25.721021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.538 [2024-07-25 04:16:25.721270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.538 [2024-07-25 04:16:25.721514] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.538 [2024-07-25 04:16:25.721539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.538 [2024-07-25 04:16:25.721561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.538 [2024-07-25 04:16:25.725119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.538 [2024-07-25 04:16:25.734398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.539 [2024-07-25 04:16:25.734844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.539 [2024-07-25 04:16:25.734876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.539 [2024-07-25 04:16:25.734895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.539 [2024-07-25 04:16:25.735133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.539 [2024-07-25 04:16:25.735387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.539 [2024-07-25 04:16:25.735418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.539 [2024-07-25 04:16:25.735436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.539 [2024-07-25 04:16:25.738999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.539 [2024-07-25 04:16:25.748265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.539 [2024-07-25 04:16:25.748701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.539 [2024-07-25 04:16:25.748734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.539 [2024-07-25 04:16:25.748753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.539 [2024-07-25 04:16:25.748991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.539 [2024-07-25 04:16:25.749233] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.539 [2024-07-25 04:16:25.749273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.539 [2024-07-25 04:16:25.749291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.539 [2024-07-25 04:16:25.752857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.539 [2024-07-25 04:16:25.762113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.539 [2024-07-25 04:16:25.762570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.539 [2024-07-25 04:16:25.762599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.539 [2024-07-25 04:16:25.762616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.539 [2024-07-25 04:16:25.762863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.539 [2024-07-25 04:16:25.763107] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.539 [2024-07-25 04:16:25.763132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.539 [2024-07-25 04:16:25.763147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.539 [2024-07-25 04:16:25.766669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.539 [2024-07-25 04:16:25.776130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.539 [2024-07-25 04:16:25.776554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.539 [2024-07-25 04:16:25.776596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.539 [2024-07-25 04:16:25.776612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.539 [2024-07-25 04:16:25.776839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.539 [2024-07-25 04:16:25.777082] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.539 [2024-07-25 04:16:25.777107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.539 [2024-07-25 04:16:25.777125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.539 [2024-07-25 04:16:25.780735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.539 [2024-07-25 04:16:25.789989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.539 [2024-07-25 04:16:25.790438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.539 [2024-07-25 04:16:25.790471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.539 [2024-07-25 04:16:25.790490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.539 [2024-07-25 04:16:25.790730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.539 [2024-07-25 04:16:25.790974] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.539 [2024-07-25 04:16:25.790999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.539 [2024-07-25 04:16:25.791016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.539 [2024-07-25 04:16:25.794585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.539 [2024-07-25 04:16:25.803845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.539 [2024-07-25 04:16:25.804300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.539 [2024-07-25 04:16:25.804330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.539 [2024-07-25 04:16:25.804346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.539 [2024-07-25 04:16:25.804603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.539 [2024-07-25 04:16:25.804847] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.539 [2024-07-25 04:16:25.804872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.539 [2024-07-25 04:16:25.804888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.539 [2024-07-25 04:16:25.808457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.539 [2024-07-25 04:16:25.817708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.539 [2024-07-25 04:16:25.818218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.539 [2024-07-25 04:16:25.818276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.539 [2024-07-25 04:16:25.818296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.539 [2024-07-25 04:16:25.818535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.539 [2024-07-25 04:16:25.818777] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.539 [2024-07-25 04:16:25.818803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.539 [2024-07-25 04:16:25.818819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.539 [2024-07-25 04:16:25.822389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.539 [2024-07-25 04:16:25.831639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.539 [2024-07-25 04:16:25.832046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.539 [2024-07-25 04:16:25.832078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.539 [2024-07-25 04:16:25.832104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.539 [2024-07-25 04:16:25.832356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.539 [2024-07-25 04:16:25.832598] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.539 [2024-07-25 04:16:25.832624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.539 [2024-07-25 04:16:25.832641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.798 [2024-07-25 04:16:25.836199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.798 [2024-07-25 04:16:25.845462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.798 [2024-07-25 04:16:25.845893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.798 [2024-07-25 04:16:25.845924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.798 [2024-07-25 04:16:25.845943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.798 [2024-07-25 04:16:25.846181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.798 [2024-07-25 04:16:25.846438] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.798 [2024-07-25 04:16:25.846465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.798 [2024-07-25 04:16:25.846482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.798 [2024-07-25 04:16:25.850039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.798 [2024-07-25 04:16:25.859300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.798 [2024-07-25 04:16:25.859725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.798 [2024-07-25 04:16:25.859757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.798 [2024-07-25 04:16:25.859776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.798 [2024-07-25 04:16:25.860014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.798 [2024-07-25 04:16:25.860270] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.798 [2024-07-25 04:16:25.860296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.798 [2024-07-25 04:16:25.860312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.798 [2024-07-25 04:16:25.863867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.798 [2024-07-25 04:16:25.873128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.798 [2024-07-25 04:16:25.873563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.798 [2024-07-25 04:16:25.873596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.798 [2024-07-25 04:16:25.873614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.798 [2024-07-25 04:16:25.873853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.798 [2024-07-25 04:16:25.874095] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.798 [2024-07-25 04:16:25.874126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.798 [2024-07-25 04:16:25.874143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.798 [2024-07-25 04:16:25.877710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.798 [2024-07-25 04:16:25.886976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.798 [2024-07-25 04:16:25.887422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.798 [2024-07-25 04:16:25.887454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.798 [2024-07-25 04:16:25.887473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.798 [2024-07-25 04:16:25.887712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.798 [2024-07-25 04:16:25.887955] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.798 [2024-07-25 04:16:25.887980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.798 [2024-07-25 04:16:25.887997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.798 [2024-07-25 04:16:25.891563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.798 [2024-07-25 04:16:25.900827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.798 [2024-07-25 04:16:25.901313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.798 [2024-07-25 04:16:25.901345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.798 [2024-07-25 04:16:25.901364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.798 [2024-07-25 04:16:25.901602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.798 [2024-07-25 04:16:25.901844] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.798 [2024-07-25 04:16:25.901870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.798 [2024-07-25 04:16:25.901887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.798 [2024-07-25 04:16:25.905461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.798 [2024-07-25 04:16:25.914723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.798 [2024-07-25 04:16:25.915152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.798 [2024-07-25 04:16:25.915184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.798 [2024-07-25 04:16:25.915203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.798 [2024-07-25 04:16:25.915454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.798 [2024-07-25 04:16:25.915697] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.798 [2024-07-25 04:16:25.915722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.798 [2024-07-25 04:16:25.915739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.798 [2024-07-25 04:16:25.919303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.798 [2024-07-25 04:16:25.928557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.798 [2024-07-25 04:16:25.928982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.798 [2024-07-25 04:16:25.929016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.798 [2024-07-25 04:16:25.929035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.798 [2024-07-25 04:16:25.929287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.798 [2024-07-25 04:16:25.929530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.798 [2024-07-25 04:16:25.929556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.798 [2024-07-25 04:16:25.929573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.798 [2024-07-25 04:16:25.933129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.798 [2024-07-25 04:16:25.942392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.798 [2024-07-25 04:16:25.942890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.798 [2024-07-25 04:16:25.942919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.799 [2024-07-25 04:16:25.942936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.799 [2024-07-25 04:16:25.943189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.799 [2024-07-25 04:16:25.943446] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.799 [2024-07-25 04:16:25.943472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.799 [2024-07-25 04:16:25.943489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.799 [2024-07-25 04:16:25.947046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.799 [2024-07-25 04:16:25.956308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.799 [2024-07-25 04:16:25.956869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.799 [2024-07-25 04:16:25.956922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:10.799 [2024-07-25 04:16:25.956941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:10.799 [2024-07-25 04:16:25.957179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:10.799 [2024-07-25 04:16:25.957436] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.799 [2024-07-25 04:16:25.957463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.799 [2024-07-25 04:16:25.957479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.799 [2024-07-25 04:16:25.961035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.799 [2024-07-25 04:16:25.970301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.799 [2024-07-25 04:16:25.970853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.799 [2024-07-25 04:16:25.970905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.799 [2024-07-25 04:16:25.970923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.799 [2024-07-25 04:16:25.971167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.799 [2024-07-25 04:16:25.971422] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.799 [2024-07-25 04:16:25.971449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.799 [2024-07-25 04:16:25.971466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.799 [2024-07-25 04:16:25.975022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.799 [2024-07-25 04:16:25.984295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.799 [2024-07-25 04:16:25.984842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.799 [2024-07-25 04:16:25.984893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.799 [2024-07-25 04:16:25.984913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.799 [2024-07-25 04:16:25.985151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.799 [2024-07-25 04:16:25.985408] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.799 [2024-07-25 04:16:25.985434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.799 [2024-07-25 04:16:25.985451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.799 [2024-07-25 04:16:25.989004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.799 [2024-07-25 04:16:25.998260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.799 [2024-07-25 04:16:25.998763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.799 [2024-07-25 04:16:25.998812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.799 [2024-07-25 04:16:25.998831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.799 [2024-07-25 04:16:25.999069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.799 [2024-07-25 04:16:25.999327] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.799 [2024-07-25 04:16:25.999353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.799 [2024-07-25 04:16:25.999370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.799 [2024-07-25 04:16:26.002930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.799 [2024-07-25 04:16:26.012179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.799 [2024-07-25 04:16:26.012752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.799 [2024-07-25 04:16:26.012806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.799 [2024-07-25 04:16:26.012826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.799 [2024-07-25 04:16:26.013064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.799 [2024-07-25 04:16:26.013321] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.799 [2024-07-25 04:16:26.013348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.799 [2024-07-25 04:16:26.013369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.799 [2024-07-25 04:16:26.016924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.799 [2024-07-25 04:16:26.026173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.799 [2024-07-25 04:16:26.026710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.799 [2024-07-25 04:16:26.026763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.799 [2024-07-25 04:16:26.026781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.799 [2024-07-25 04:16:26.027020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.799 [2024-07-25 04:16:26.027278] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.799 [2024-07-25 04:16:26.027304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.799 [2024-07-25 04:16:26.027320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.799 [2024-07-25 04:16:26.030874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.799 [2024-07-25 04:16:26.040120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.799 [2024-07-25 04:16:26.040615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.799 [2024-07-25 04:16:26.040668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.799 [2024-07-25 04:16:26.040687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.799 [2024-07-25 04:16:26.040925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.799 [2024-07-25 04:16:26.041168] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.799 [2024-07-25 04:16:26.041193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.799 [2024-07-25 04:16:26.041210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.799 [2024-07-25 04:16:26.044779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.799 [2024-07-25 04:16:26.054035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.799 [2024-07-25 04:16:26.054547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.799 [2024-07-25 04:16:26.054599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.799 [2024-07-25 04:16:26.054618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.799 [2024-07-25 04:16:26.054857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.799 [2024-07-25 04:16:26.055100] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.799 [2024-07-25 04:16:26.055126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.799 [2024-07-25 04:16:26.055142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.799 [2024-07-25 04:16:26.058711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.799 [2024-07-25 04:16:26.067963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.799 [2024-07-25 04:16:26.068398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.799 [2024-07-25 04:16:26.068436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.799 [2024-07-25 04:16:26.068456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.799 [2024-07-25 04:16:26.068695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.799 [2024-07-25 04:16:26.068937] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.799 [2024-07-25 04:16:26.068963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.799 [2024-07-25 04:16:26.068979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.799 [2024-07-25 04:16:26.072546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.799 [2024-07-25 04:16:26.081806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.799 [2024-07-25 04:16:26.082331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.800 [2024-07-25 04:16:26.082366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:10.800 [2024-07-25 04:16:26.082384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:10.800 [2024-07-25 04:16:26.082624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:10.800 [2024-07-25 04:16:26.082866] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.800 [2024-07-25 04:16:26.082892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.800 [2024-07-25 04:16:26.082909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.800 [2024-07-25 04:16:26.086477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.058 [2024-07-25 04:16:26.095729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.058 [2024-07-25 04:16:26.096288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.058 [2024-07-25 04:16:26.096321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.058 [2024-07-25 04:16:26.096339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.058 [2024-07-25 04:16:26.096578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.058 [2024-07-25 04:16:26.096820] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.058 [2024-07-25 04:16:26.096846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.058 [2024-07-25 04:16:26.096862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.058 [2024-07-25 04:16:26.100434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.058 [2024-07-25 04:16:26.109689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.058 [2024-07-25 04:16:26.110119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.058 [2024-07-25 04:16:26.110152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.058 [2024-07-25 04:16:26.110171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.058 [2024-07-25 04:16:26.110423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.058 [2024-07-25 04:16:26.110673] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.058 [2024-07-25 04:16:26.110699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.058 [2024-07-25 04:16:26.110716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.058 [2024-07-25 04:16:26.114279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.058 [2024-07-25 04:16:26.123529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.058 [2024-07-25 04:16:26.123931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.058 [2024-07-25 04:16:26.123963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.058 [2024-07-25 04:16:26.123982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.058 [2024-07-25 04:16:26.124220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.058 [2024-07-25 04:16:26.124477] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.058 [2024-07-25 04:16:26.124504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.058 [2024-07-25 04:16:26.124520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.058 [2024-07-25 04:16:26.128078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.058 [2024-07-25 04:16:26.137546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.058 [2024-07-25 04:16:26.137973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.058 [2024-07-25 04:16:26.138005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.058 [2024-07-25 04:16:26.138024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.058 [2024-07-25 04:16:26.138274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.058 [2024-07-25 04:16:26.138517] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.058 [2024-07-25 04:16:26.138543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.058 [2024-07-25 04:16:26.138559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.058 [2024-07-25 04:16:26.142116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.058 [2024-07-25 04:16:26.151393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.058 [2024-07-25 04:16:26.151799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.058 [2024-07-25 04:16:26.151831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.058 [2024-07-25 04:16:26.151850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.058 [2024-07-25 04:16:26.152089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.058 [2024-07-25 04:16:26.152345] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.058 [2024-07-25 04:16:26.152372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.058 [2024-07-25 04:16:26.152389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.058 [2024-07-25 04:16:26.155951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.058 [2024-07-25 04:16:26.165208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.058 [2024-07-25 04:16:26.165646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.059 [2024-07-25 04:16:26.165679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.059 [2024-07-25 04:16:26.165698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.059 [2024-07-25 04:16:26.165936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.059 [2024-07-25 04:16:26.166178] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.059 [2024-07-25 04:16:26.166204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.059 [2024-07-25 04:16:26.166220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.059 [2024-07-25 04:16:26.169784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.059 [2024-07-25 04:16:26.179037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.059 [2024-07-25 04:16:26.179473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.059 [2024-07-25 04:16:26.179506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.059 [2024-07-25 04:16:26.179525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.059 [2024-07-25 04:16:26.179764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.059 [2024-07-25 04:16:26.180019] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.059 [2024-07-25 04:16:26.180045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.059 [2024-07-25 04:16:26.180060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.059 [2024-07-25 04:16:26.183631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.059 [2024-07-25 04:16:26.192880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.059 [2024-07-25 04:16:26.193310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.059 [2024-07-25 04:16:26.193344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.059 [2024-07-25 04:16:26.193363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.059 [2024-07-25 04:16:26.193602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.059 [2024-07-25 04:16:26.193847] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.059 [2024-07-25 04:16:26.193873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.059 [2024-07-25 04:16:26.193889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.059 [2024-07-25 04:16:26.197458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.059 [2024-07-25 04:16:26.206742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.059 [2024-07-25 04:16:26.207169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.059 [2024-07-25 04:16:26.207201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.059 [2024-07-25 04:16:26.207225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.059 [2024-07-25 04:16:26.207477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.059 [2024-07-25 04:16:26.207720] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.059 [2024-07-25 04:16:26.207746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.059 [2024-07-25 04:16:26.207763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.059 [2024-07-25 04:16:26.211332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.059 [2024-07-25 04:16:26.220584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.059 [2024-07-25 04:16:26.221030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.059 [2024-07-25 04:16:26.221062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.059 [2024-07-25 04:16:26.221081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.059 [2024-07-25 04:16:26.221334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.059 [2024-07-25 04:16:26.221577] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.059 [2024-07-25 04:16:26.221603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.059 [2024-07-25 04:16:26.221620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.059 [2024-07-25 04:16:26.225180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.059 [2024-07-25 04:16:26.234451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.059 [2024-07-25 04:16:26.234855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.059 [2024-07-25 04:16:26.234889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.059 [2024-07-25 04:16:26.234907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.059 [2024-07-25 04:16:26.235147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.059 [2024-07-25 04:16:26.235405] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.059 [2024-07-25 04:16:26.235431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.059 [2024-07-25 04:16:26.235448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.059 [2024-07-25 04:16:26.239009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.059 [2024-07-25 04:16:26.248277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.059 [2024-07-25 04:16:26.248705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.059 [2024-07-25 04:16:26.248738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.059 [2024-07-25 04:16:26.248757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.059 [2024-07-25 04:16:26.248995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.059 [2024-07-25 04:16:26.249238] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.059 [2024-07-25 04:16:26.249283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.059 [2024-07-25 04:16:26.249300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.059 [2024-07-25 04:16:26.252858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.059 [2024-07-25 04:16:26.262128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.059 [2024-07-25 04:16:26.262577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.059 [2024-07-25 04:16:26.262610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.059 [2024-07-25 04:16:26.262629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.059 [2024-07-25 04:16:26.262869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.059 [2024-07-25 04:16:26.263113] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.059 [2024-07-25 04:16:26.263140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.059 [2024-07-25 04:16:26.263156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.059 [2024-07-25 04:16:26.266729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.059 [2024-07-25 04:16:26.276014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.059 [2024-07-25 04:16:26.276449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.059 [2024-07-25 04:16:26.276482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.059 [2024-07-25 04:16:26.276501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.059 [2024-07-25 04:16:26.276740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.059 [2024-07-25 04:16:26.276983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.059 [2024-07-25 04:16:26.277008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.059 [2024-07-25 04:16:26.277025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.059 [2024-07-25 04:16:26.280607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.059 [2024-07-25 04:16:26.289860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.059 [2024-07-25 04:16:26.290305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.059 [2024-07-25 04:16:26.290338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420
00:33:11.059 [2024-07-25 04:16:26.290357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set
00:33:11.059 [2024-07-25 04:16:26.290596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor
00:33:11.059 [2024-07-25 04:16:26.290839] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.059 [2024-07-25 04:16:26.290865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.059 [2024-07-25 04:16:26.290881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.059 [2024-07-25 04:16:26.294456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.059 [2024-07-25 04:16:26.303732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.059 [2024-07-25 04:16:26.304158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.059 [2024-07-25 04:16:26.304190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.060 [2024-07-25 04:16:26.304208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.060 [2024-07-25 04:16:26.304458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.060 [2024-07-25 04:16:26.304702] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.060 [2024-07-25 04:16:26.304727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.060 [2024-07-25 04:16:26.304744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.060 [2024-07-25 04:16:26.308308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.060 [2024-07-25 04:16:26.317563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.060 [2024-07-25 04:16:26.317997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.060 [2024-07-25 04:16:26.318030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.060 [2024-07-25 04:16:26.318049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.060 [2024-07-25 04:16:26.318298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.060 [2024-07-25 04:16:26.318542] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.060 [2024-07-25 04:16:26.318567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.060 [2024-07-25 04:16:26.318584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.060 [2024-07-25 04:16:26.322140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.060 [2024-07-25 04:16:26.331401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.060 [2024-07-25 04:16:26.331894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.060 [2024-07-25 04:16:26.331927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.060 [2024-07-25 04:16:26.331945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.060 [2024-07-25 04:16:26.332184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.060 [2024-07-25 04:16:26.332464] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.060 [2024-07-25 04:16:26.332492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.060 [2024-07-25 04:16:26.332509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.060 [2024-07-25 04:16:26.336074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.060 [2024-07-25 04:16:26.345350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.060 [2024-07-25 04:16:26.345778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.060 [2024-07-25 04:16:26.345811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.060 [2024-07-25 04:16:26.345831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.060 [2024-07-25 04:16:26.346076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.060 [2024-07-25 04:16:26.346335] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.060 [2024-07-25 04:16:26.346361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.060 [2024-07-25 04:16:26.346379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.060 [2024-07-25 04:16:26.349939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 979868 Killed "${NVMF_APP[@]}" "$@" 00:33:11.060 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:11.060 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:11.060 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:11.060 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:11.060 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:11.320 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=980821 00:33:11.320 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:11.320 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 980821 00:33:11.320 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 980821 ']' 00:33:11.320 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.320 [2024-07-25 04:16:26.359230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.320 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:11.320 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:11.320 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:11.320 [2024-07-25 04:16:26.359651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.320 [2024-07-25 04:16:26.359684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.320 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:11.320 [2024-07-25 04:16:26.359703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.320 [2024-07-25 04:16:26.359942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.320 [2024-07-25 04:16:26.360186] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.320 [2024-07-25 04:16:26.360212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.320 [2024-07-25 04:16:26.360228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.320 [2024-07-25 04:16:26.363901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.320 [2024-07-25 04:16:26.373162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.320 [2024-07-25 04:16:26.373595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.320 [2024-07-25 04:16:26.373627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.320 [2024-07-25 04:16:26.373652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.320 [2024-07-25 04:16:26.373892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.320 [2024-07-25 04:16:26.374134] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.320 [2024-07-25 04:16:26.374159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.320 [2024-07-25 04:16:26.374175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.320 [2024-07-25 04:16:26.377742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.320 [2024-07-25 04:16:26.387004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.320 [2024-07-25 04:16:26.387441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.320 [2024-07-25 04:16:26.387474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.320 [2024-07-25 04:16:26.387494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.320 [2024-07-25 04:16:26.387732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.320 [2024-07-25 04:16:26.387976] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.320 [2024-07-25 04:16:26.388000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.320 [2024-07-25 04:16:26.388017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.320 [2024-07-25 04:16:26.391583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.320 [2024-07-25 04:16:26.400843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.320 [2024-07-25 04:16:26.401254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.320 [2024-07-25 04:16:26.401287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.320 [2024-07-25 04:16:26.401306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.320 [2024-07-25 04:16:26.401544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.320 [2024-07-25 04:16:26.401787] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.320 [2024-07-25 04:16:26.401812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.320 [2024-07-25 04:16:26.401828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.320 [2024-07-25 04:16:26.405406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.320 [2024-07-25 04:16:26.406507] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:33:11.320 [2024-07-25 04:16:26.406611] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:11.320 [2024-07-25 04:16:26.414860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.320 [2024-07-25 04:16:26.415269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.320 [2024-07-25 04:16:26.415301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.320 [2024-07-25 04:16:26.415320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.320 [2024-07-25 04:16:26.415565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.320 [2024-07-25 04:16:26.415808] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.320 [2024-07-25 04:16:26.415833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.320 [2024-07-25 04:16:26.415849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.320 [2024-07-25 04:16:26.419416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.320 [2024-07-25 04:16:26.429071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.320 [2024-07-25 04:16:26.429489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.320 [2024-07-25 04:16:26.429522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.320 [2024-07-25 04:16:26.429541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.320 [2024-07-25 04:16:26.429781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.320 [2024-07-25 04:16:26.430025] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.320 [2024-07-25 04:16:26.430050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.320 [2024-07-25 04:16:26.430067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.320 [2024-07-25 04:16:26.433635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.320 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.320 [2024-07-25 04:16:26.442895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.320 [2024-07-25 04:16:26.443312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.320 [2024-07-25 04:16:26.443345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.320 [2024-07-25 04:16:26.443368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.320 [2024-07-25 04:16:26.443607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.320 [2024-07-25 04:16:26.443850] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.320 [2024-07-25 04:16:26.443875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.320 [2024-07-25 04:16:26.443891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.320 [2024-07-25 04:16:26.447018] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:11.321 [2024-07-25 04:16:26.447461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.321 [2024-07-25 04:16:26.456918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.321 [2024-07-25 04:16:26.457320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.321 [2024-07-25 04:16:26.457353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.321 [2024-07-25 04:16:26.457372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.321 [2024-07-25 04:16:26.457610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.321 [2024-07-25 04:16:26.457859] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.321 [2024-07-25 04:16:26.457884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.321 [2024-07-25 04:16:26.457900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.321 [2024-07-25 04:16:26.461464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.321 [2024-07-25 04:16:26.470931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.321 [2024-07-25 04:16:26.471346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.321 [2024-07-25 04:16:26.471378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.321 [2024-07-25 04:16:26.471398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.321 [2024-07-25 04:16:26.471637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.321 [2024-07-25 04:16:26.471881] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.321 [2024-07-25 04:16:26.471906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.321 [2024-07-25 04:16:26.471922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.321 [2024-07-25 04:16:26.475483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.321 [2024-07-25 04:16:26.479176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:11.321 [2024-07-25 04:16:26.484972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.321 [2024-07-25 04:16:26.485447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.321 [2024-07-25 04:16:26.485491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.321 [2024-07-25 04:16:26.485512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.321 [2024-07-25 04:16:26.485754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.321 [2024-07-25 04:16:26.486000] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.321 [2024-07-25 04:16:26.486025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.321 [2024-07-25 04:16:26.486042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.321 [2024-07-25 04:16:26.489617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.321 [2024-07-25 04:16:26.498887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.321 [2024-07-25 04:16:26.499452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.321 [2024-07-25 04:16:26.499505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.321 [2024-07-25 04:16:26.499526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.321 [2024-07-25 04:16:26.499783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.321 [2024-07-25 04:16:26.500030] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.321 [2024-07-25 04:16:26.500055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.321 [2024-07-25 04:16:26.500073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.321 [2024-07-25 04:16:26.503669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.321 [2024-07-25 04:16:26.512914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.321 [2024-07-25 04:16:26.513355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.321 [2024-07-25 04:16:26.513389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.321 [2024-07-25 04:16:26.513408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.321 [2024-07-25 04:16:26.513647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.321 [2024-07-25 04:16:26.513891] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.321 [2024-07-25 04:16:26.513915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.321 [2024-07-25 04:16:26.513932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.321 [2024-07-25 04:16:26.517497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.321 [2024-07-25 04:16:26.526741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.321 [2024-07-25 04:16:26.527211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.321 [2024-07-25 04:16:26.527262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.321 [2024-07-25 04:16:26.527283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.321 [2024-07-25 04:16:26.527524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.321 [2024-07-25 04:16:26.527769] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.321 [2024-07-25 04:16:26.527794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.321 [2024-07-25 04:16:26.527811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.321 [2024-07-25 04:16:26.531392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.321 [2024-07-25 04:16:26.540672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.321 [2024-07-25 04:16:26.541261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.321 [2024-07-25 04:16:26.541315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.321 [2024-07-25 04:16:26.541338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.321 [2024-07-25 04:16:26.541587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.321 [2024-07-25 04:16:26.541834] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.321 [2024-07-25 04:16:26.541859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.321 [2024-07-25 04:16:26.541878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.321 [2024-07-25 04:16:26.545443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.321 [2024-07-25 04:16:26.554699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.321 [2024-07-25 04:16:26.555157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.321 [2024-07-25 04:16:26.555201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.321 [2024-07-25 04:16:26.555221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.321 [2024-07-25 04:16:26.555469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.321 [2024-07-25 04:16:26.555714] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.321 [2024-07-25 04:16:26.555739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.321 [2024-07-25 04:16:26.555755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.321 [2024-07-25 04:16:26.559314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.321 [2024-07-25 04:16:26.568560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.321 [2024-07-25 04:16:26.569031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.321 [2024-07-25 04:16:26.569063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.321 [2024-07-25 04:16:26.569090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.321 [2024-07-25 04:16:26.569342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.321 [2024-07-25 04:16:26.569587] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.321 [2024-07-25 04:16:26.569612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.321 [2024-07-25 04:16:26.569630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.321 [2024-07-25 04:16:26.573184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.321 [2024-07-25 04:16:26.573473] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:11.321 [2024-07-25 04:16:26.573513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:11.321 [2024-07-25 04:16:26.573530] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:11.321 [2024-07-25 04:16:26.573545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:11.321 [2024-07-25 04:16:26.573558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:11.321 [2024-07-25 04:16:26.573652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:11.321 [2024-07-25 04:16:26.573710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:11.321 [2024-07-25 04:16:26.573714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.321 [2024-07-25 04:16:26.582489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.321 [2024-07-25 04:16:26.583052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.321 [2024-07-25 04:16:26.583104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.321 [2024-07-25 04:16:26.583125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.321 [2024-07-25 04:16:26.583384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.321 [2024-07-25 04:16:26.583631] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.321 [2024-07-25 04:16:26.583656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.321 [2024-07-25 04:16:26.583676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.321 [2024-07-25 04:16:26.587266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.321 [2024-07-25 04:16:26.596538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.321 [2024-07-25 04:16:26.597178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.321 [2024-07-25 04:16:26.597226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.321 [2024-07-25 04:16:26.597258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.321 [2024-07-25 04:16:26.597513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.321 [2024-07-25 04:16:26.597761] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.321 [2024-07-25 04:16:26.597787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.321 [2024-07-25 04:16:26.597806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.321 [2024-07-25 04:16:26.601374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.321 [2024-07-25 04:16:26.610450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.321 [2024-07-25 04:16:26.611054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.321 [2024-07-25 04:16:26.611104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.321 [2024-07-25 04:16:26.611126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.321 [2024-07-25 04:16:26.611387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.321 [2024-07-25 04:16:26.611634] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.321 [2024-07-25 04:16:26.611660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.321 [2024-07-25 04:16:26.611679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.321 [2024-07-25 04:16:26.615240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.580 [2024-07-25 04:16:26.624523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.580 [2024-07-25 04:16:26.625060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.580 [2024-07-25 04:16:26.625108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.580 [2024-07-25 04:16:26.625131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.580 [2024-07-25 04:16:26.625394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.580 [2024-07-25 04:16:26.625642] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.580 [2024-07-25 04:16:26.625668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.580 [2024-07-25 04:16:26.625688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.580 [2024-07-25 04:16:26.629258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.580 [2024-07-25 04:16:26.638519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.580 [2024-07-25 04:16:26.639015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.580 [2024-07-25 04:16:26.639065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.580 [2024-07-25 04:16:26.639087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.580 [2024-07-25 04:16:26.639343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.580 [2024-07-25 04:16:26.639589] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.580 [2024-07-25 04:16:26.639616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.580 [2024-07-25 04:16:26.639637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.580 [2024-07-25 04:16:26.643195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.580 [2024-07-25 04:16:26.652475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.580 [2024-07-25 04:16:26.653067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.580 [2024-07-25 04:16:26.653125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.580 [2024-07-25 04:16:26.653147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.580 [2024-07-25 04:16:26.653415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.580 [2024-07-25 04:16:26.653663] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.580 [2024-07-25 04:16:26.653689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.580 [2024-07-25 04:16:26.653708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.580 [2024-07-25 04:16:26.657273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.580 [2024-07-25 04:16:26.666344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.580 [2024-07-25 04:16:26.666807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.580 [2024-07-25 04:16:26.666842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.580 [2024-07-25 04:16:26.666861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.580 [2024-07-25 04:16:26.667101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.580 [2024-07-25 04:16:26.667356] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.580 [2024-07-25 04:16:26.667381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.581 [2024-07-25 04:16:26.667399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.581 [2024-07-25 04:16:26.670951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.581 [2024-07-25 04:16:26.680151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.581 [2024-07-25 04:16:26.680541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.581 [2024-07-25 04:16:26.680571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.581 [2024-07-25 04:16:26.680588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.581 [2024-07-25 04:16:26.680805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.581 [2024-07-25 04:16:26.681035] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.581 [2024-07-25 04:16:26.681058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.581 [2024-07-25 04:16:26.681073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.581 [2024-07-25 04:16:26.684372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.581 [2024-07-25 04:16:26.693758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:11.581 [2024-07-25 04:16:26.694139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.581 [2024-07-25 04:16:26.694168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.581 [2024-07-25 04:16:26.694184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:11.581 [2024-07-25 04:16:26.694409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.581 [2024-07-25 04:16:26.694642] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.581 [2024-07-25 04:16:26.694665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.581 [2024-07-25 04:16:26.694679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.581 [2024-07-25 04:16:26.698040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.581 [2024-07-25 04:16:26.707216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.581 [2024-07-25 04:16:26.707995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.581 [2024-07-25 04:16:26.708025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.581 [2024-07-25 04:16:26.708042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.581 [2024-07-25 04:16:26.708274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.581 [2024-07-25 04:16:26.708495] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.581 [2024-07-25 04:16:26.708518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.581 [2024-07-25 04:16:26.708548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.581 [2024-07-25 04:16:26.711739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:11.581 [2024-07-25 04:16:26.717310] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.581 [2024-07-25 04:16:26.720852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.581 [2024-07-25 04:16:26.721252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.581 [2024-07-25 04:16:26.721282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.581 [2024-07-25 04:16:26.721299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.581 [2024-07-25 04:16:26.721514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.581 [2024-07-25 04:16:26.721742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.581 [2024-07-25 04:16:26.721763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.581 [2024-07-25 04:16:26.721777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.581 [2024-07-25 04:16:26.725055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:11.581 [2024-07-25 04:16:26.734481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.581 [2024-07-25 04:16:26.734839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.581 [2024-07-25 04:16:26.734868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.581 [2024-07-25 04:16:26.734886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.581 [2024-07-25 04:16:26.735102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.581 [2024-07-25 04:16:26.735332] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.581 [2024-07-25 04:16:26.735355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.581 [2024-07-25 04:16:26.735369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.581 [2024-07-25 04:16:26.738650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.581 [2024-07-25 04:16:26.748111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.581 [2024-07-25 04:16:26.748700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.581 [2024-07-25 04:16:26.748741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.581 [2024-07-25 04:16:26.748761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.581 [2024-07-25 04:16:26.748999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.581 [2024-07-25 04:16:26.749214] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.581 [2024-07-25 04:16:26.749235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.581 [2024-07-25 04:16:26.749278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.581 [2024-07-25 04:16:26.752439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.581 Malloc0 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:11.581 [2024-07-25 04:16:26.761827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.581 [2024-07-25 04:16:26.762285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.581 [2024-07-25 04:16:26.762316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.581 [2024-07-25 04:16:26.762334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.581 [2024-07-25 04:16:26.762565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.581 [2024-07-25 04:16:26.762778] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.581 [2024-07-25 04:16:26.762799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.581 [2024-07-25 04:16:26.762813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:11.581 [2024-07-25 04:16:26.766113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.581 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:11.581 [2024-07-25 04:16:26.775575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.581 [2024-07-25 04:16:26.776003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.581 [2024-07-25 04:16:26.776031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f71b50 with addr=10.0.0.2, port=4420 00:33:11.581 [2024-07-25 04:16:26.776047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71b50 is same with the state(5) to be set 00:33:11.582 [2024-07-25 04:16:26.776271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f71b50 (9): Bad file descriptor 00:33:11.582 [2024-07-25 04:16:26.776391] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:11.582 [2024-07-25 
04:16:26.776490] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.582 [2024-07-25 04:16:26.776512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.582 [2024-07-25 04:16:26.776526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.582 [2024-07-25 04:16:26.779924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.582 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.582 04:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 980157 00:33:11.582 [2024-07-25 04:16:26.789094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.582 [2024-07-25 04:16:26.866322] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:21.558 00:33:21.558 Latency(us) 00:33:21.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.558 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:21.558 Verification LBA range: start 0x0 length 0x4000 00:33:21.558 Nvme1n1 : 15.01 6345.68 24.79 9136.12 0.00 8241.98 861.68 21845.33 00:33:21.558 =================================================================================================================== 00:33:21.558 Total : 6345.68 24.79 9136.12 0.00 8241.98 861.68 21845.33 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:21.558 rmmod nvme_tcp 00:33:21.558 rmmod nvme_fabrics 00:33:21.558 rmmod nvme_keyring 00:33:21.558 04:16:36 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 980821 ']' 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 980821 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 980821 ']' 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 980821 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 980821 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 980821' 00:33:21.558 killing process with pid 980821 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 980821 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 980821 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:21.558 04:16:36 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.558 04:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:23.473 00:33:23.473 real 0m22.412s 00:33:23.473 user 1m0.364s 00:33:23.473 sys 0m4.161s 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.473 ************************************ 00:33:23.473 END TEST nvmf_bdevperf 00:33:23.473 ************************************ 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.473 ************************************ 00:33:23.473 START TEST nvmf_target_disconnect 00:33:23.473 ************************************ 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:23.473 * Looking for test storage... 
00:33:23.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:23.473 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.474 04:16:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:23.474 04:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:25.376 
04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.376 04:16:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:25.376 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.376 04:16:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:25.376 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up 
== up ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:25.376 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:25.376 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:25.376 04:16:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:25.376 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:25.377 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:33:25.377 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:25.377 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:25.377 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:25.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:25.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:33:25.636 00:33:25.636 --- 10.0.0.2 ping statistics --- 00:33:25.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.636 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:25.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:25.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:33:25.636 00:33:25.636 --- 10.0.0.1 ping statistics --- 00:33:25.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.636 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:25.636 ************************************ 00:33:25.636 START TEST nvmf_target_disconnect_tc1 00:33:25.636 ************************************ 00:33:25.636 04:16:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:25.636 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.636 [2024-07-25 04:16:40.836833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.636 [2024-07-25 04:16:40.836901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21533e0 with addr=10.0.0.2, port=4420 00:33:25.636 [2024-07-25 04:16:40.836934] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:25.636 [2024-07-25 04:16:40.836955] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:25.636 [2024-07-25 04:16:40.836975] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:25.636 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:25.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:25.636 Initializing NVMe Controllers 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:25.636 04:16:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:25.636 00:33:25.636 real 0m0.096s 00:33:25.636 user 0m0.041s 00:33:25.636 sys 0m0.055s 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:25.636 ************************************ 00:33:25.636 END TEST nvmf_target_disconnect_tc1 00:33:25.636 ************************************ 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:25.636 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:25.636 ************************************ 00:33:25.636 START TEST nvmf_target_disconnect_tc2 00:33:25.637 ************************************ 00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=983973 00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 983973 00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 983973 ']' 00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:25.637 04:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:25.895 [2024-07-25 04:16:40.949932] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:33:25.895 [2024-07-25 04:16:40.950012] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:25.895 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.895 [2024-07-25 04:16:40.987229] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:25.895 [2024-07-25 04:16:41.019116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:25.895 [2024-07-25 04:16:41.114588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:25.895 [2024-07-25 04:16:41.114653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:25.895 [2024-07-25 04:16:41.114669] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:25.895 [2024-07-25 04:16:41.114682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:25.895 [2024-07-25 04:16:41.114693] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:25.895 [2024-07-25 04:16:41.114789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:25.895 [2024-07-25 04:16:41.115045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:25.895 [2024-07-25 04:16:41.115119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:25.895 [2024-07-25 04:16:41.115128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.153 Malloc0 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.153 04:16:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.153 [2024-07-25 04:16:41.301192] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.153 04:16:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.153 [2024-07-25 04:16:41.329461] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=983997 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:26.153 04:16:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:26.153 EAL: No free 2048 kB 
hugepages reported on node 1
00:33:28.052 04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 983973
00:33:28.052 04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:33:28.325 Read completed with error (sct=0, sc=8)
00:33:28.325 starting I/O failed
00:33:28.325 Write completed with error (sct=0, sc=8)
00:33:28.325 starting I/O failed
[... repeated Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:33:28.325 [2024-07-25 04:16:43.356522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:33:28.325 [2024-07-25 04:16:43.356852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:33:28.326 [2024-07-25 04:16:43.357190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:33:28.326 [2024-07-25 04:16:43.357590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:28.326 [2024-07-25 04:16:43.357864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.326 [2024-07-25 04:16:43.357905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.326 qpair failed and we were unable to recover it.
[... the connect()/sock-connection-error/"qpair failed and we were unable to recover it." triple repeats through 04:16:43.370, against tqpair=0x5fc4b0 and, in later attempts, tqpair=0x7f5400000b90 and tqpair=0x7f5408000b90, every attempt failing with errno = 111 ...]
00:33:28.328 [2024-07-25 04:16:43.370814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.370841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.371029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.371065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.371256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.371305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.371433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.371459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.371615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.371657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 
00:33:28.328 [2024-07-25 04:16:43.371815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.371844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.372066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.372096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.372227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.372286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.372447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.372473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.372595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.372622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 
00:33:28.328 [2024-07-25 04:16:43.372765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.372809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.372975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.373005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.373148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.373175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.373305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.373333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.373467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.373494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 
00:33:28.328 [2024-07-25 04:16:43.373676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.373707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.373873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.373902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.374037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.374066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.374325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.374369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 00:33:28.328 [2024-07-25 04:16:43.374525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.328 [2024-07-25 04:16:43.374552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.328 qpair failed and we were unable to recover it. 
00:33:28.329 [2024-07-25 04:16:43.374726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.374773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.374947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.374974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.375108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.375147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.375305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.375334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.375461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.375487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 
00:33:28.329 [2024-07-25 04:16:43.375651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.375680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.375815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.375843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.375980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.376008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.376194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.376228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.376389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.376416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 
00:33:28.329 [2024-07-25 04:16:43.376588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.376613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.376754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.376795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.376933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.376962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.377124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.377153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.377314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.377340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 
00:33:28.329 [2024-07-25 04:16:43.377493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.377519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.377689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.377716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.377858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.377883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.378006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.378033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.378212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.378246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 
00:33:28.329 [2024-07-25 04:16:43.378430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.378469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.378674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.378720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.378927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.378983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.379158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.379184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.379327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.379359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 
00:33:28.329 [2024-07-25 04:16:43.379527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.379555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.379809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.379864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.380031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.380074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.380223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.380258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.380423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.380457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 
00:33:28.329 [2024-07-25 04:16:43.380585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.380612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.380761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.380804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.380987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.381013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.381185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.381211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.381369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.381396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 
00:33:28.329 [2024-07-25 04:16:43.381565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.381596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.381782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.381810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.381974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.382003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.329 qpair failed and we were unable to recover it. 00:33:28.329 [2024-07-25 04:16:43.382163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.329 [2024-07-25 04:16:43.382189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.382350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.382376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 
00:33:28.330 [2024-07-25 04:16:43.382523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.382566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.382822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.382871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.383009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.383035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.383193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.383223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.383385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.383411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 
00:33:28.330 [2024-07-25 04:16:43.383586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.383612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.383762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.383804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.384051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.384076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.384234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.384275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.384428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.384454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 
00:33:28.330 [2024-07-25 04:16:43.384601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.384627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.384815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.384844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.385006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.385057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.385207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.385234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.385396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.385422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 
00:33:28.330 [2024-07-25 04:16:43.385543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.385569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.385765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.385794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.385955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.385984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.386122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.386151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.386352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.386378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 
00:33:28.330 [2024-07-25 04:16:43.386528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.386554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.386675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.386701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.386930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.386990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.387158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.387187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 00:33:28.330 [2024-07-25 04:16:43.387363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.330 [2024-07-25 04:16:43.387390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.330 qpair failed and we were unable to recover it. 
00:33:28.330 [2024-07-25 04:16:43.387515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.330 [2024-07-25 04:16:43.387541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.330 qpair failed and we were unable to recover it.
00:33:28.330 [2024-07-25 04:16:43.387693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.330 [2024-07-25 04:16:43.387719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.330 qpair failed and we were unable to recover it.
00:33:28.330 [2024-07-25 04:16:43.387845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.330 [2024-07-25 04:16:43.387870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.330 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.388081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.388107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.388232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.388267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.388391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.388416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.388562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.388587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.388746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.388774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.388937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.388966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.389129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.389158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.389325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.389351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.389544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.389572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.389714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.389739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.389859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.389885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.390015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.390041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.390168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.390193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.390337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.390364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.390481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.390507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.390650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.390677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.390821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.390865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.391019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.391047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.391207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.391232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.391389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.391415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.391553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.391582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.391727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.391753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.391876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.391902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.392097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.392123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.392270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.392297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.392443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.392469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.392592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.392618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.392766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.392794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.392957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.392987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.393158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.393184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.393349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.393388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.393547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.393574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.393693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.393719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.331 [2024-07-25 04:16:43.393948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.331 [2024-07-25 04:16:43.393974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.331 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.394200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.394228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.394391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.394417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.394630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.394683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.394875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.394901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.395050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.395076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.395223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.395258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.395388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.395415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.395593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.395639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.395870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.395920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.396067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.396093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.396290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.396335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.396481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.396507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.396661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.396687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.396832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.396858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.397037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.397067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.397219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.397250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.397383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.397409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.397590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.397620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.397778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.397821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.397992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.398021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.398181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.398207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.398377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.398421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.398601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.398627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.398745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.398770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.398937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.398981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.399120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.399146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.399315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.399347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.399523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.399549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.399705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.399748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.399971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.400026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.400191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.400218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.400432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.400460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.400712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.400763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.400960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.401004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.332 [2024-07-25 04:16:43.401153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.332 [2024-07-25 04:16:43.401179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.332 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.401332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.401358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.401502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.401528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.401685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.401711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.401934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.401986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.402162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.402189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.402359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.402403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.402572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.402621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.402798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.402825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.402997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.403040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.403183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.403209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.403390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.403417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.403567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.403593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.403758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.403802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.403985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.404011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.404185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.404212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.404348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.404375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.404521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.404547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.404694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.404721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.404845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.404871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.405017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.405043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.405198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.405224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.405378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.405404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.405545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.333 [2024-07-25 04:16:43.405589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.333 qpair failed and we were unable to recover it.
00:33:28.333 [2024-07-25 04:16:43.405733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.333 [2024-07-25 04:16:43.405759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.333 qpair failed and we were unable to recover it. 00:33:28.333 [2024-07-25 04:16:43.405907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.333 [2024-07-25 04:16:43.405933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.333 qpair failed and we were unable to recover it. 00:33:28.333 [2024-07-25 04:16:43.406100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.333 [2024-07-25 04:16:43.406126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.333 qpair failed and we were unable to recover it. 00:33:28.333 [2024-07-25 04:16:43.406272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.333 [2024-07-25 04:16:43.406299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.333 qpair failed and we were unable to recover it. 00:33:28.333 [2024-07-25 04:16:43.406444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.333 [2024-07-25 04:16:43.406487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.333 qpair failed and we were unable to recover it. 
00:33:28.333 [2024-07-25 04:16:43.406655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.333 [2024-07-25 04:16:43.406685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.333 qpair failed and we were unable to recover it. 00:33:28.333 [2024-07-25 04:16:43.406850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.333 [2024-07-25 04:16:43.406878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.333 qpair failed and we were unable to recover it. 00:33:28.333 [2024-07-25 04:16:43.407026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.333 [2024-07-25 04:16:43.407055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.333 qpair failed and we were unable to recover it. 00:33:28.333 [2024-07-25 04:16:43.407175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.333 [2024-07-25 04:16:43.407200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.333 qpair failed and we were unable to recover it. 00:33:28.333 [2024-07-25 04:16:43.407355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.333 [2024-07-25 04:16:43.407383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.333 qpair failed and we were unable to recover it. 
00:33:28.333 [2024-07-25 04:16:43.407601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.333 [2024-07-25 04:16:43.407625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.407755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.407780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.407895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.407919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.408068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.408094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.408251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.408279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 
00:33:28.334 [2024-07-25 04:16:43.408466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.408491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.408744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.408769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.408892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.408918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.409091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.409120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.409283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.409321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 
00:33:28.334 [2024-07-25 04:16:43.409505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.409553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.409693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.409737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.409905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.409995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.410137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.410168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.410315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.410360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 
00:33:28.334 [2024-07-25 04:16:43.410516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.410542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.410731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.410757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.410908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.410934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.411082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.411109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.411260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.411287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 
00:33:28.334 [2024-07-25 04:16:43.411415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.411440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.411585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.411614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.411768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.411797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.411962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.411990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.412132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.412159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 
00:33:28.334 [2024-07-25 04:16:43.412344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.412393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.412559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.412588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.412877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.412928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.413095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.413124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.413300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.413326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 
00:33:28.334 [2024-07-25 04:16:43.413450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.413476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.413639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.413667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.413803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.413832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.414023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.414051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 00:33:28.334 [2024-07-25 04:16:43.414210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.334 [2024-07-25 04:16:43.414239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.334 qpair failed and we were unable to recover it. 
00:33:28.334 [2024-07-25 04:16:43.414422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.414447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.414562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.414603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.414763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.414792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.414946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.414974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.415160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.415188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 
00:33:28.335 [2024-07-25 04:16:43.415354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.415379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.415533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.415558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.415766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.415811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.415959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.415985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.416134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.416162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 
00:33:28.335 [2024-07-25 04:16:43.416358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.416385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.416530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.416573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.416741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.416771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.416968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.416994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.417169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.417194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 
00:33:28.335 [2024-07-25 04:16:43.417354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.417380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.417528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.417553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.417714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.417742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.417906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.417931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.418077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.418107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 
00:33:28.335 [2024-07-25 04:16:43.418256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.418281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.418401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.418426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.418575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.418602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.418785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.418811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.418960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.418986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 
00:33:28.335 [2024-07-25 04:16:43.419131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.419158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.419285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.419311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.419436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.419463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.419636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.419661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.419810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.419834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 
00:33:28.335 [2024-07-25 04:16:43.419976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.335 [2024-07-25 04:16:43.420018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.335 qpair failed and we were unable to recover it. 00:33:28.335 [2024-07-25 04:16:43.420226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.336 [2024-07-25 04:16:43.420262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.336 qpair failed and we were unable to recover it. 00:33:28.336 [2024-07-25 04:16:43.420406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.336 [2024-07-25 04:16:43.420431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.336 qpair failed and we were unable to recover it. 00:33:28.336 [2024-07-25 04:16:43.420579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.336 [2024-07-25 04:16:43.420604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.336 qpair failed and we were unable to recover it. 00:33:28.336 [2024-07-25 04:16:43.420742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.336 [2024-07-25 04:16:43.420770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.336 qpair failed and we were unable to recover it. 
00:33:28.336 [2024-07-25 04:16:43.420977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.336 [2024-07-25 04:16:43.421002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.336 qpair failed and we were unable to recover it. 00:33:28.336 [2024-07-25 04:16:43.421148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.336 [2024-07-25 04:16:43.421172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.336 qpair failed and we were unable to recover it. 00:33:28.336 [2024-07-25 04:16:43.421293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.336 [2024-07-25 04:16:43.421319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.336 qpair failed and we were unable to recover it. 00:33:28.336 [2024-07-25 04:16:43.421434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.336 [2024-07-25 04:16:43.421460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.336 qpair failed and we were unable to recover it. 00:33:28.336 [2024-07-25 04:16:43.421618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.336 [2024-07-25 04:16:43.421643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.336 qpair failed and we were unable to recover it. 
00:33:28.336 [2024-07-25 04:16:43.421807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.336 [2024-07-25 04:16:43.421836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.336 qpair failed and we were unable to recover it.
[the three-line record above repeats continuously from 04:16:43.421 through 04:16:43.444 (log timestamps 00:33:28.336-00:33:28.339), with tqpair alternating between 0x5fc4b0 and 0x7f5408000b90; every connect() attempt to 10.0.0.2:4420 failed with errno 111 (ECONNREFUSED) and no qpair could be recovered]
00:33:28.339 [2024-07-25 04:16:43.444731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.444757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 00:33:28.339 [2024-07-25 04:16:43.444905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.444935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 00:33:28.339 [2024-07-25 04:16:43.445188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.445217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 00:33:28.339 [2024-07-25 04:16:43.445417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.445444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 00:33:28.339 [2024-07-25 04:16:43.445611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.445638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 
00:33:28.339 [2024-07-25 04:16:43.445819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.445847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 00:33:28.339 [2024-07-25 04:16:43.446043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.446069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 00:33:28.339 [2024-07-25 04:16:43.446257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.446302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 00:33:28.339 [2024-07-25 04:16:43.446473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.446499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 00:33:28.339 [2024-07-25 04:16:43.446626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.446651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 
00:33:28.339 [2024-07-25 04:16:43.446799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.446824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 00:33:28.339 [2024-07-25 04:16:43.446971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.446996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 00:33:28.339 [2024-07-25 04:16:43.447169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.447197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 00:33:28.339 [2024-07-25 04:16:43.447381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.447407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 00:33:28.339 [2024-07-25 04:16:43.447539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.447564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 
00:33:28.339 [2024-07-25 04:16:43.447708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.447733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 00:33:28.339 [2024-07-25 04:16:43.447893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.339 [2024-07-25 04:16:43.447922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.339 qpair failed and we were unable to recover it. 00:33:28.339 [2024-07-25 04:16:43.448062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.448089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.448304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.448330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.448513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.448538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 
00:33:28.340 [2024-07-25 04:16:43.448685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.448711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.448839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.448864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.449037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.449065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.449212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.449237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.449371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.449397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 
00:33:28.340 [2024-07-25 04:16:43.449506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.449532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.449676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.449706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.449852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.449877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.449999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.450025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.450149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.450174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 
00:33:28.340 [2024-07-25 04:16:43.450309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.450336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.450480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.450506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.450630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.450656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.450802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.450847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.451037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.451063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 
00:33:28.340 [2024-07-25 04:16:43.451197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.451222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.451391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.451417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.451580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.451609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.451779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.451805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.451955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.451980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 
00:33:28.340 [2024-07-25 04:16:43.452135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.452161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.452309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.452335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.452472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.452499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.452662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.452689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.452852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.452878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 
00:33:28.340 [2024-07-25 04:16:43.452999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.453025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.453154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.340 [2024-07-25 04:16:43.453179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.340 qpair failed and we were unable to recover it. 00:33:28.340 [2024-07-25 04:16:43.453352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.453379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.453540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.453568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.453711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.453736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 
00:33:28.341 [2024-07-25 04:16:43.453852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.453877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.454026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.454051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.454262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.454288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.454410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.454437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.454592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.454618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 
00:33:28.341 [2024-07-25 04:16:43.454761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.454785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.454940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.454965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.455137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.455163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.455292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.455318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.455441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.455467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 
00:33:28.341 [2024-07-25 04:16:43.455595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.455620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.455752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.455779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.455925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.455952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.456102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.456128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.456277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.456303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 
00:33:28.341 [2024-07-25 04:16:43.456490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.456516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.456632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.456674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.456820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.456852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.456992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.457018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.457171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.457197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 
00:33:28.341 [2024-07-25 04:16:43.457375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.457401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.457524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.457549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.457692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.457718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.457910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.457935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 00:33:28.341 [2024-07-25 04:16:43.458059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.341 [2024-07-25 04:16:43.458084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.341 qpair failed and we were unable to recover it. 
00:33:28.341 [2024-07-25 04:16:43.458206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.341 [2024-07-25 04:16:43.458232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.341 qpair failed and we were unable to recover it.
...
00:33:28.344 [2024-07-25 04:16:43.477125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.344 [2024-07-25 04:16:43.477168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.344 qpair failed and we were unable to recover it.
00:33:28.344 [2024-07-25 04:16:43.477375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.344 [2024-07-25 04:16:43.477414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.344 qpair failed and we were unable to recover it.
...
00:33:28.345 [2024-07-25 04:16:43.478790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.345 [2024-07-25 04:16:43.478834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.345 qpair failed and we were unable to recover it.
00:33:28.345 [2024-07-25 04:16:43.479065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.479093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.479217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.479249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.479375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.479401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.479576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.479604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.479771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.479797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 
00:33:28.345 [2024-07-25 04:16:43.479949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.479973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.480124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.480149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.480292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.480318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.480466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.480491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.480706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.480732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 
00:33:28.345 [2024-07-25 04:16:43.480852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.480878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.481029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.481053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.481176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.481200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.481333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.481359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.481504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.481529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 
00:33:28.345 [2024-07-25 04:16:43.481710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.481736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.481889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.481915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.482059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.482084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.482231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.482269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.482396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.482422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 
00:33:28.345 [2024-07-25 04:16:43.482612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.482641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.482820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.482870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.483063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.483088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.483230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.483263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.483411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.483454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 
00:33:28.345 [2024-07-25 04:16:43.483627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.483652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.483822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.483847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.484109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.484158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.484314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.345 [2024-07-25 04:16:43.484340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-07-25 04:16:43.484508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.484538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 
00:33:28.346 [2024-07-25 04:16:43.484659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.484687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.484823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.484849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.485028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.485053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.485223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.485257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.485421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.485445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 
00:33:28.346 [2024-07-25 04:16:43.485566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.485592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.485760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.485788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.485952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.485976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.486170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.486200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.486381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.486408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 
00:33:28.346 [2024-07-25 04:16:43.486528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.486558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.486709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.486734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.486880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.486908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.487112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.487138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.487310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.487339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 
00:33:28.346 [2024-07-25 04:16:43.487479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.487507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.487671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.487697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.487848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.487874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.488010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.488038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.488234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.488271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 
00:33:28.346 [2024-07-25 04:16:43.488415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.488441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.488588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.488614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.488822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.488848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.489052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.489079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.489219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.489253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 
00:33:28.346 [2024-07-25 04:16:43.489428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.489454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.489643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.489671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.489813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.489838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.489961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.489987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.490129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.490154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 
00:33:28.346 [2024-07-25 04:16:43.490326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.490368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.490504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.490530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.490669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.490694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.490899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.490924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.491097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.491122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 
00:33:28.346 [2024-07-25 04:16:43.491267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.491310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.491445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.491472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-07-25 04:16:43.491643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.346 [2024-07-25 04:16:43.491668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.347 qpair failed and we were unable to recover it. 00:33:28.347 [2024-07-25 04:16:43.491823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.347 [2024-07-25 04:16:43.491847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.347 qpair failed and we were unable to recover it. 00:33:28.347 [2024-07-25 04:16:43.491999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.347 [2024-07-25 04:16:43.492042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.347 qpair failed and we were unable to recover it. 
00:33:28.347 [2024-07-25 04:16:43.492182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.347 [2024-07-25 04:16:43.492207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.347 qpair failed and we were unable to recover it. 00:33:28.347 [2024-07-25 04:16:43.492388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.347 [2024-07-25 04:16:43.492430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.347 qpair failed and we were unable to recover it. 00:33:28.347 [2024-07-25 04:16:43.492558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.347 [2024-07-25 04:16:43.492587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.347 qpair failed and we were unable to recover it. 00:33:28.347 [2024-07-25 04:16:43.492785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.347 [2024-07-25 04:16:43.492810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.347 qpair failed and we were unable to recover it. 00:33:28.347 [2024-07-25 04:16:43.492960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.347 [2024-07-25 04:16:43.493000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.347 qpair failed and we were unable to recover it. 
00:33:28.347 [2024-07-25 04:16:43.493198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.347 [2024-07-25 04:16:43.493223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.347 qpair failed and we were unable to recover it.
[04:16:43.493356 through 04:16:43.514324: the same three-line failure record (posix_sock_create connect() failed with errno = 111, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously, alternating between tqpair=0x5fc4b0 and tqpair=0x7f5400000b90]
00:33:28.350 [2024-07-25 04:16:43.514469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.350 [2024-07-25 04:16:43.514495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.350 qpair failed and we were unable to recover it. 00:33:28.350 [2024-07-25 04:16:43.514618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.350 [2024-07-25 04:16:43.514645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.350 qpair failed and we were unable to recover it. 00:33:28.350 [2024-07-25 04:16:43.514789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.350 [2024-07-25 04:16:43.514815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.350 qpair failed and we were unable to recover it. 00:33:28.350 [2024-07-25 04:16:43.515004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.350 [2024-07-25 04:16:43.515030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.350 qpair failed and we were unable to recover it. 00:33:28.350 [2024-07-25 04:16:43.515143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.350 [2024-07-25 04:16:43.515170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.350 qpair failed and we were unable to recover it. 
00:33:28.350 [2024-07-25 04:16:43.515339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.350 [2024-07-25 04:16:43.515370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.350 qpair failed and we were unable to recover it. 00:33:28.350 [2024-07-25 04:16:43.515561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.350 [2024-07-25 04:16:43.515587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.350 qpair failed and we were unable to recover it. 00:33:28.350 [2024-07-25 04:16:43.515748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.350 [2024-07-25 04:16:43.515778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.350 qpair failed and we were unable to recover it. 00:33:28.350 [2024-07-25 04:16:43.515944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.350 [2024-07-25 04:16:43.515973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.350 qpair failed and we were unable to recover it. 00:33:28.350 [2024-07-25 04:16:43.516167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.350 [2024-07-25 04:16:43.516194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.350 qpair failed and we were unable to recover it. 
00:33:28.350 [2024-07-25 04:16:43.516318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.516372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.516585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.516611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.516753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.516785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.516952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.516981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.517170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.517200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 
00:33:28.351 [2024-07-25 04:16:43.517401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.517428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.517551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.517578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.517728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.517755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.517875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.517901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.518044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.518071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 
00:33:28.351 [2024-07-25 04:16:43.518255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.518285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.518455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.518482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.518600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.518648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.518815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.518845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.518977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.519004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 
00:33:28.351 [2024-07-25 04:16:43.519181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.519223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.519418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.519448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.519646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.519672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.519839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.519869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.520025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.520055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 
00:33:28.351 [2024-07-25 04:16:43.520191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.520218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.520431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.520474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.520634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.520663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.520787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.520814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.521054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.521105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 
00:33:28.351 [2024-07-25 04:16:43.521268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.521297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.521463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.521489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.521608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.521651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.521842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.521870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.522047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.522073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 
00:33:28.351 [2024-07-25 04:16:43.522253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.522293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.351 [2024-07-25 04:16:43.522422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.351 [2024-07-25 04:16:43.522450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.351 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.522601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.522630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.522795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.522856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.523083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.523135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 
00:33:28.352 [2024-07-25 04:16:43.523312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.523339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.523488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.523515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.523688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.523719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.523875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.523902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.524071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.524114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 
00:33:28.352 [2024-07-25 04:16:43.524288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.524319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.524485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.524511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.524692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.524727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.524891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.524922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.525086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.525113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 
00:33:28.352 [2024-07-25 04:16:43.525279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.525310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.525469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.525498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.525642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.525668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.525822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.525849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.526019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.526049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 
00:33:28.352 [2024-07-25 04:16:43.526248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.526292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.526433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.526459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.526640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.526667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.526842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.526868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.527086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.527139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 
00:33:28.352 [2024-07-25 04:16:43.527275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.527305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.527488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.527514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.527629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.527673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.352 qpair failed and we were unable to recover it. 00:33:28.352 [2024-07-25 04:16:43.527835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.352 [2024-07-25 04:16:43.527886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 00:33:28.353 [2024-07-25 04:16:43.528055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.528082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 
00:33:28.353 [2024-07-25 04:16:43.528228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.528277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 00:33:28.353 [2024-07-25 04:16:43.528440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.528471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 00:33:28.353 [2024-07-25 04:16:43.528666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.528693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 00:33:28.353 [2024-07-25 04:16:43.528925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.528978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 00:33:28.353 [2024-07-25 04:16:43.529144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.529175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 
00:33:28.353 [2024-07-25 04:16:43.529346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.529374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 00:33:28.353 [2024-07-25 04:16:43.529528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.529554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 00:33:28.353 [2024-07-25 04:16:43.529703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.529747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 00:33:28.353 [2024-07-25 04:16:43.529922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.529949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 00:33:28.353 [2024-07-25 04:16:43.530075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.530103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 
00:33:28.353 [2024-07-25 04:16:43.530225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.530259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 00:33:28.353 [2024-07-25 04:16:43.530377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.530403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 00:33:28.353 [2024-07-25 04:16:43.530553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.530595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 00:33:28.353 [2024-07-25 04:16:43.530786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.530815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 00:33:28.353 [2024-07-25 04:16:43.530957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.353 [2024-07-25 04:16:43.530983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.353 qpair failed and we were unable to recover it. 
00:33:28.353 [2024-07-25 04:16:43.531170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.353 [2024-07-25 04:16:43.531199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.353 qpair failed and we were unable to recover it.
00:33:28.353 [2024-07-25 04:16:43.531343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.353 [2024-07-25 04:16:43.531373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.353 qpair failed and we were unable to recover it.
00:33:28.353 [2024-07-25 04:16:43.531508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.353 [2024-07-25 04:16:43.531534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.353 qpair failed and we were unable to recover it.
00:33:28.353 [2024-07-25 04:16:43.531707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.353 [2024-07-25 04:16:43.531736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.353 qpair failed and we were unable to recover it.
00:33:28.353 [2024-07-25 04:16:43.531953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.353 [2024-07-25 04:16:43.532003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.353 qpair failed and we were unable to recover it.
00:33:28.353 [2024-07-25 04:16:43.532146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.353 [2024-07-25 04:16:43.532172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.353 qpair failed and we were unable to recover it.
00:33:28.353 [2024-07-25 04:16:43.532371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.353 [2024-07-25 04:16:43.532401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.353 qpair failed and we were unable to recover it.
00:33:28.353 [2024-07-25 04:16:43.532537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.353 [2024-07-25 04:16:43.532571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.353 qpair failed and we were unable to recover it.
00:33:28.353 [2024-07-25 04:16:43.532715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.353 [2024-07-25 04:16:43.532741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.353 qpair failed and we were unable to recover it.
00:33:28.353 [2024-07-25 04:16:43.532858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.353 [2024-07-25 04:16:43.532884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.353 qpair failed and we were unable to recover it.
00:33:28.353 [2024-07-25 04:16:43.533083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.353 [2024-07-25 04:16:43.533112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.353 qpair failed and we were unable to recover it.
00:33:28.353 [2024-07-25 04:16:43.533250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.353 [2024-07-25 04:16:43.533276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.353 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.533440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.533468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.533599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.533630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.533775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.533802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.533996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.534025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.534195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.534220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.534378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.534405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.534562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.534591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.534783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.534812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.534966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.535007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.535171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.535200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.535382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.535408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.535527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.535554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.535663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.535689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.535958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.535988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.536161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.536187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.536384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.536413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.536550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.536579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.536785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.536811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.536960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.536985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.537136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.537181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.537374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.537401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.537563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.537592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.537764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.537793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.537966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.537992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.538139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.538164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.354 [2024-07-25 04:16:43.538313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.354 [2024-07-25 04:16:43.538356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.354 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.538497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.538524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.538650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.538676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.538826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.538855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.539025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.539051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.539207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.539234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.539418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.539446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.539625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.539651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.539775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.539818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.540005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.540033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.540210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.540246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.540402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.540428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.540624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.540653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.540807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.540832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.540954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.540981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.541141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.541170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.541341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.541367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.541528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.541557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.541721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.541750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.541893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.541919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.542062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.542104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.542255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.542298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.542456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.542482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.542629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.542656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.542792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.542818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.542963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.542989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.543160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.543186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.543371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.543397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.543516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.543543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.543698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.355 [2024-07-25 04:16:43.543738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.355 qpair failed and we were unable to recover it.
00:33:28.355 [2024-07-25 04:16:43.543864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.543893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.544031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.544057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.544202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.544227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.544424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.544450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.544595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.544622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.544746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.544789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.544923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.544953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.545126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.545153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.545307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.545334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.545478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.545503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.545623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.545650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.545819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.545847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.546004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.546033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.546170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.546196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.546368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.546412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.546543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.546573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.546725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.546751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.546899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.546941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.547098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.547127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.547303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.547330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.547522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.547556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.547745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.547775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.547945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.547972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.548120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.548165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.548368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.548404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.548551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.548578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.548710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.548754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.548886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.356 [2024-07-25 04:16:43.548916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.356 qpair failed and we were unable to recover it.
00:33:28.356 [2024-07-25 04:16:43.549089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.356 [2024-07-25 04:16:43.549119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.356 qpair failed and we were unable to recover it. 00:33:28.356 [2024-07-25 04:16:43.549280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.356 [2024-07-25 04:16:43.549322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.356 qpair failed and we were unable to recover it. 00:33:28.356 [2024-07-25 04:16:43.549465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.356 [2024-07-25 04:16:43.549492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.356 qpair failed and we were unable to recover it. 00:33:28.356 [2024-07-25 04:16:43.549652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.356 [2024-07-25 04:16:43.549680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.356 qpair failed and we were unable to recover it. 00:33:28.356 [2024-07-25 04:16:43.549797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.356 [2024-07-25 04:16:43.549825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.356 qpair failed and we were unable to recover it. 
00:33:28.356 [2024-07-25 04:16:43.549995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.356 [2024-07-25 04:16:43.550024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.356 qpair failed and we were unable to recover it. 00:33:28.356 [2024-07-25 04:16:43.550201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.356 [2024-07-25 04:16:43.550228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.356 qpair failed and we were unable to recover it. 00:33:28.356 [2024-07-25 04:16:43.550406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.356 [2024-07-25 04:16:43.550435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.356 qpair failed and we were unable to recover it. 00:33:28.356 [2024-07-25 04:16:43.550597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.356 [2024-07-25 04:16:43.550626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.356 qpair failed and we were unable to recover it. 00:33:28.356 [2024-07-25 04:16:43.550783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.356 [2024-07-25 04:16:43.550810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.356 qpair failed and we were unable to recover it. 
00:33:28.356 [2024-07-25 04:16:43.550975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.551004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.551202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.551228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.551379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.551406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.551572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.551601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.551741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.551770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 
00:33:28.357 [2024-07-25 04:16:43.551940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.551967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.552095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.552121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.552267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.552294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.552440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.552467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.552594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.552639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 
00:33:28.357 [2024-07-25 04:16:43.552770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.552799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.552993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.553020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.553181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.553210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.553358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.553385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.553561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.553588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 
00:33:28.357 [2024-07-25 04:16:43.553741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.553768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.553931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.553960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.554104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.554130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.554275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.554318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.554480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.554509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 
00:33:28.357 [2024-07-25 04:16:43.554681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.554708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.554824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.554867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.555056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.555089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.555253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.555281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.555456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.555485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 
00:33:28.357 [2024-07-25 04:16:43.555642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.555671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.555811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.555838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.555984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.556010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.556136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.556162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.556302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.556329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 
00:33:28.357 [2024-07-25 04:16:43.556472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.556498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.556697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.556725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.556853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.556878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.557067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.557097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.557221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.557258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 
00:33:28.357 [2024-07-25 04:16:43.557426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.557451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.557621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.557650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.557815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.557844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.558011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.558036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 00:33:28.357 [2024-07-25 04:16:43.558180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.558206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.357 qpair failed and we were unable to recover it. 
00:33:28.357 [2024-07-25 04:16:43.558323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.357 [2024-07-25 04:16:43.558350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.558496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.558522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.558715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.558745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.558917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.558943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.559089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.559115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 
00:33:28.358 [2024-07-25 04:16:43.559264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.559292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.559416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.559442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.559565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.559592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.559733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.559759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.559913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.559939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 
00:33:28.358 [2024-07-25 04:16:43.560103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.560142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.560267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.560295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.560468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.560495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.560663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.560707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.560840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.560884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 
00:33:28.358 [2024-07-25 04:16:43.561052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.561095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.561308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.561336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.561574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.561617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.561823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.561866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.562039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.562083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 
00:33:28.358 [2024-07-25 04:16:43.562228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.562263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.562433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.562479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.562678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.562722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.562851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.562878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.563008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.563047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 
00:33:28.358 [2024-07-25 04:16:43.563211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.563238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.563401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.563426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.563579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.563608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.563776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.563805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.563936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.563963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 
00:33:28.358 [2024-07-25 04:16:43.564100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.564130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.564301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.564328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.564475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.564500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.564703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.564731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.564892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.564920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 
00:33:28.358 [2024-07-25 04:16:43.565082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.565110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.565270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.565309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.565470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.565497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.565679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.565722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.565920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.565969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 
00:33:28.358 [2024-07-25 04:16:43.566146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.358 [2024-07-25 04:16:43.566173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.358 qpair failed and we were unable to recover it. 00:33:28.358 [2024-07-25 04:16:43.566324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.359 [2024-07-25 04:16:43.566350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.359 qpair failed and we were unable to recover it. 00:33:28.359 [2024-07-25 04:16:43.566487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.359 [2024-07-25 04:16:43.566516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.359 qpair failed and we were unable to recover it. 00:33:28.359 [2024-07-25 04:16:43.566735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.359 [2024-07-25 04:16:43.566778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.359 qpair failed and we were unable to recover it. 00:33:28.359 [2024-07-25 04:16:43.566951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.359 [2024-07-25 04:16:43.566995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.359 qpair failed and we were unable to recover it. 
00:33:28.359 [2024-07-25 04:16:43.567151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.567179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.567307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.567334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.567463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.567489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.567651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.567680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.567904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.567952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.568093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.568121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.568321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.568346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.568491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.568532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.568694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.568722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.568885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.568913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.569101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.569130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.569312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.569338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.569459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.569484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.569645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.569674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.569868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.569893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.570064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.570091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.570291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.570317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.570469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.570495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.570710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.570739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.570958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.571009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.571173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.571201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.571340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.571366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.571491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.571517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.571683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.571709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.571901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.571951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.572102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.572129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.572295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.572322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.572452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.572477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.572645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.572673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.572807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.572836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.572995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.573024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.573177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.573206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.573364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.573392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.573514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.573539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.573734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.573762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.573887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.573915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.574107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.574134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.574312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.359 [2024-07-25 04:16:43.574338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.359 qpair failed and we were unable to recover it.
00:33:28.359 [2024-07-25 04:16:43.574485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.574511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.574661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.574687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.574856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.574884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.575077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.575105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.575285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.575311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.575461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.575487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.575684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.575713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.575894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.575927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.576118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.576146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.576301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.576327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.576497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.576523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.576690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.576718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.576887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.576916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.577140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.577168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.577367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.577393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.577551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.577580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.577790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.577839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.577997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.578026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.578188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.578217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.578409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.578447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.578602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.578630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.578780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.578824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.579087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.579136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.579256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.579283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.579459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.579484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.579622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.579651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.579922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.579972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.580094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.580121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.580279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.580322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.580482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.580511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.580649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.580677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.580895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.580946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.581092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.581118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.581266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.581293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.581442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.581471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.581591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.581617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.581761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.581787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.581959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.360 [2024-07-25 04:16:43.581986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.360 qpair failed and we were unable to recover it.
00:33:28.360 [2024-07-25 04:16:43.582170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.582198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.582346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.582372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.582511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.582542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.582758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.582802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.582937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.582981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.583122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.583148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.583319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.583346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.583491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.583517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.583667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.583693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.583862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.583907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.584067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.584093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.584270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.584297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.584449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.584475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.584611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.584640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.584805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.584832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.584992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.585021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.585170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.585196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.585346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.585372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.585549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.585591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.585747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.585776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.585944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.585972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.586159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.586187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.586356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.586382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.586505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.586536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.586661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.586685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.586830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.586854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.586991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.587019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.587205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.587233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.587379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.587405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.587532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.587557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.587721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.587749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.587905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.587933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.588155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.588184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.588373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.588399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.588539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.588565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.588714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.588741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.588939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.588967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.589130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.589159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.589323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.589349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.589499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.589525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.589673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.589716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.589913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.589973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.590132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.590159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.590351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.590376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.361 qpair failed and we were unable to recover it.
00:33:28.361 [2024-07-25 04:16:43.590519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.361 [2024-07-25 04:16:43.590545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.590689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.590718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.590885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.590913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.591133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.591160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.591333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.591359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.591480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.591507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.591693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.591718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.591864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.591888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.592031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.592059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.592233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.592266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.592416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.592441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.592610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.592638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.592839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.592902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.593058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.593086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.593252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.593295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.593422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.593447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.593586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.593611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.593760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.593801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.593992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.594072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.594264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.594307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.594470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.594501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.594657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.594682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.594867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.594895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.595034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.595062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.595228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.595259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.595433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.595458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.595684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.595709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.595925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.595977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.596114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.596142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.596311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.596338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.596486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.596512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.596621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.596662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.596830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.596858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.597049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.597077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.597237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.597270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.597434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.597458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.597602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.597626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.597748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.597773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.597919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.597943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.598115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.598148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.598319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.598346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.598493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.598518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.598694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.362 [2024-07-25 04:16:43.598719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.362 qpair failed and we were unable to recover it.
00:33:28.362 [2024-07-25 04:16:43.598899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.598927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.599092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.599121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.599262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.599289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.599417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.599443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.599649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.599695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.599922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.599951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.600111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.600140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.600267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.600318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.600439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.600465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.600617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.600643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.600789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.600817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.601003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.601031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.601189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.601218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.601378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.601404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.601548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.601576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.601760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.601788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.601947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.601975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.602138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.602168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.602345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.602372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.602557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.602586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.602861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.602910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.603096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.603125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.603285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.603329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.603452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.603478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.603605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.603630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.603799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.603828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.603984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.604014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.604177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.604206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.604404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.604431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.604561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.604588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.604760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.604789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.604924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.604953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.605121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.605152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.605333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.605360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.605487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.605513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.605715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.605744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.363 [2024-07-25 04:16:43.605897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.363 [2024-07-25 04:16:43.605926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.363 qpair failed and we were unable to recover it.
00:33:28.364 [2024-07-25 04:16:43.606089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.364 [2024-07-25 04:16:43.606119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.364 qpair failed and we were unable to recover it.
00:33:28.364 [2024-07-25 04:16:43.606330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.364 [2024-07-25 04:16:43.606370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.364 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.606520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.606548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.606692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.606737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.606902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.606929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.607100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.607127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.607261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.607289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.607414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.607441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.607581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.607620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.607743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.607771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.607893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.607920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.608043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.608069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.608239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.608272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.608387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.608413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.608564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.608594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.608758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.608787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.608916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.608945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.609161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.609207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.609370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.609397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.609543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.609573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.609735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.609781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.646 qpair failed and we were unable to recover it.
00:33:28.646 [2024-07-25 04:16:43.609947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.646 [2024-07-25 04:16:43.609996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.610125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.610151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.610283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.610311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.610462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.610489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.610614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.610642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.610817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.610847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.611037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.611092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.611276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.611305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.611497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.611527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.611679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.611705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.611933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.611962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.612136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.612163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.612295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.612323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.612452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.612478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.612768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.612820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.612949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.612978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.613137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.613167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.613339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.613366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.613510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.613558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.613698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.613727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.613895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.613924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.614059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.614088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.614276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.614327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.614473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.614499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.614640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.614669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.614808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.614837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.614990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.615020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.615163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.615192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.615360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.615387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.615530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.615556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.615729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.615758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.615896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.615926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.616094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.616123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.616310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.616340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.616464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.616491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.616637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.616664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.616815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.616841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.647 [2024-07-25 04:16:43.617007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.647 [2024-07-25 04:16:43.617036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.647 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.617179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.617206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.617338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.617365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.617490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.617517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.617695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.617721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.617912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.617941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.618081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.618111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.618259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.618287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.618431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.618459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.618606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.618636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.618828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.618883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.619047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.619077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.619214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.619252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.619419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.619445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.619574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.619601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.619773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.619802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.619958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.619987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.620139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.620166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.620343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.620370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.620497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.620524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.620647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.620690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.620878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.620907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.621069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.621099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.621237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.621290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.621415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.621442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.621589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.621615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.621750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.621780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.621935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.621965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.622158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.622187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.622343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.622370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.622517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.622550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.622732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.622758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.622896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.622926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.623053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.623082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.623227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.623258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.623386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.623412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.623572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.623602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.623746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.648 [2024-07-25 04:16:43.623773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.648 qpair failed and we were unable to recover it.
00:33:28.648 [2024-07-25 04:16:43.623891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.623934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.624096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.624126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.624297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.624324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.624476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.624503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.624651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.624681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.624873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.624904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.625047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.625077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.625262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.625289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.625406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.625433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.625573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.625616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.625755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.625786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.625963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.626005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.626135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.626165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.626314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.626341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.626464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.626490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.626664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.626693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.626816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.626845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.627010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.627039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.627169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.627198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.627446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.627485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.627667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.627694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.627866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.627895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.628106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.628153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.628304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.628331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.628491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.628533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.628723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.628749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.628993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.629039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.629226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.629264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.629410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.629437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.629588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.629614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.629783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.629812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.629985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.630011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.630174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.630209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.630385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.630411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.630532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.630558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.630674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.630700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.630846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.630888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.631073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.649 [2024-07-25 04:16:43.631102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.649 qpair failed and we were unable to recover it.
00:33:28.649 [2024-07-25 04:16:43.631254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.631282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.631427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.631452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.631647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.631675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.631874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.631900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.632025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.632052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.632193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.632219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.632409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.632434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.632563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.632590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.632742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.632785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.632952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.632978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.633103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.633146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.633288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.633333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.633480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.633506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.633635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.633661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.633813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.633839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.633991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.634017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.634134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.634177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.634370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.634397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.634508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.634533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.634678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.634720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.634856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.634886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.635057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.635084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.635223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.635258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.635408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.635433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.635604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.635630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.635773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.635802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.635932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.635963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.636128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.636158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.636311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.636337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.636488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.636514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.636661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.636687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.636883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.636911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.637072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.637100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.637250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.637277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.637433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.637460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.637624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.637657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.650 qpair failed and we were unable to recover it.
00:33:28.650 [2024-07-25 04:16:43.637807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.650 [2024-07-25 04:16:43.637833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.651 qpair failed and we were unable to recover it.
00:33:28.651 [2024-07-25 04:16:43.637948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.651 [2024-07-25 04:16:43.637974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.651 qpair failed and we were unable to recover it.
00:33:28.651 [2024-07-25 04:16:43.638096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.651 [2024-07-25 04:16:43.638122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.651 qpair failed and we were unable to recover it.
00:33:28.651 [2024-07-25 04:16:43.638269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.651 [2024-07-25 04:16:43.638295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.651 qpair failed and we were unable to recover it.
00:33:28.651 [2024-07-25 04:16:43.638415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.651 [2024-07-25 04:16:43.638441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.651 qpair failed and we were unable to recover it.
00:33:28.651 [2024-07-25 04:16:43.638602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.651 [2024-07-25 04:16:43.638630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.651 qpair failed and we were unable to recover it.
00:33:28.651 [2024-07-25 04:16:43.638801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.651 [2024-07-25 04:16:43.638827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.651 qpair failed and we were unable to recover it.
00:33:28.651 [2024-07-25 04:16:43.639015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.651 [2024-07-25 04:16:43.639044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.651 qpair failed and we were unable to recover it.
00:33:28.651 [2024-07-25 04:16:43.639179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.651 [2024-07-25 04:16:43.639207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.651 qpair failed and we were unable to recover it.
00:33:28.651 [2024-07-25 04:16:43.639381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.651 [2024-07-25 04:16:43.639407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.651 qpair failed and we were unable to recover it.
00:33:28.651 [2024-07-25 04:16:43.639520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.651 [2024-07-25 04:16:43.639561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.651 qpair failed and we were unable to recover it. 00:33:28.651 [2024-07-25 04:16:43.639768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.651 [2024-07-25 04:16:43.639824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.651 qpair failed and we were unable to recover it. 00:33:28.651 [2024-07-25 04:16:43.639960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.651 [2024-07-25 04:16:43.639985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.651 qpair failed and we were unable to recover it. 00:33:28.651 [2024-07-25 04:16:43.640114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.651 [2024-07-25 04:16:43.640139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.651 qpair failed and we were unable to recover it. 00:33:28.651 [2024-07-25 04:16:43.640325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.651 [2024-07-25 04:16:43.640351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.651 qpair failed and we were unable to recover it. 
00:33:28.651 [2024-07-25 04:16:43.640528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.651 [2024-07-25 04:16:43.640554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.651 qpair failed and we were unable to recover it. 00:33:28.651 [2024-07-25 04:16:43.640741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.651 [2024-07-25 04:16:43.640769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.651 qpair failed and we were unable to recover it. 00:33:28.651 [2024-07-25 04:16:43.640930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.651 [2024-07-25 04:16:43.640957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.651 qpair failed and we were unable to recover it. 00:33:28.651 [2024-07-25 04:16:43.641131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.651 [2024-07-25 04:16:43.641158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.651 qpair failed and we were unable to recover it. 00:33:28.651 [2024-07-25 04:16:43.641305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.651 [2024-07-25 04:16:43.641331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.651 qpair failed and we were unable to recover it. 
00:33:28.651 [2024-07-25 04:16:43.641452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.651 [2024-07-25 04:16:43.641477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.651 qpair failed and we were unable to recover it. 00:33:28.651 [2024-07-25 04:16:43.641599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.651 [2024-07-25 04:16:43.641625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.651 qpair failed and we were unable to recover it. 00:33:28.651 [2024-07-25 04:16:43.641811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.651 [2024-07-25 04:16:43.641840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.641978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.642006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.642194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.642223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 
00:33:28.652 [2024-07-25 04:16:43.642373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.642399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.642525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.642549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.642708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.642734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.642859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.642883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.643086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.643114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 
00:33:28.652 [2024-07-25 04:16:43.643256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.643283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.643410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.643436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.643587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.643616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.643766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.643792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.643962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.643987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 
00:33:28.652 [2024-07-25 04:16:43.644129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.644155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.644280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.644306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.644451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.644476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.644672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.644700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.644857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.644883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 
00:33:28.652 [2024-07-25 04:16:43.645038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.645064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.645233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.645269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.645435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.645460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.645619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.645646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.645892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.645941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 
00:33:28.652 [2024-07-25 04:16:43.646113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.646139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.646256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.646281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.646405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.646431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.646561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.646587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.652 qpair failed and we were unable to recover it. 00:33:28.652 [2024-07-25 04:16:43.646729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.652 [2024-07-25 04:16:43.646755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 
00:33:28.653 [2024-07-25 04:16:43.646904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.646929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.647101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.647130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.647263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.647307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.647456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.647482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.647653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.647679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 
00:33:28.653 [2024-07-25 04:16:43.647809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.647850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.647974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.648002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.648151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.648177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.648330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.648356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.648504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.648546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 
00:33:28.653 [2024-07-25 04:16:43.648722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.648748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.648933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.648962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.649117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.649145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.649292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.649318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.649444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.649470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 
00:33:28.653 [2024-07-25 04:16:43.649617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.649642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.649854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.649879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.650049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.650083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.650228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.650263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.650465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.650492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 
00:33:28.653 [2024-07-25 04:16:43.650651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.650678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.650836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.650864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.651035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.651062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.651210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.651235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.651396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.651421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 
00:33:28.653 [2024-07-25 04:16:43.651568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.651593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.651710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.651735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.651937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.651964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.652106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.653 [2024-07-25 04:16:43.652148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.653 qpair failed and we were unable to recover it. 00:33:28.653 [2024-07-25 04:16:43.652312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.652339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 
00:33:28.654 [2024-07-25 04:16:43.652460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.652499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 00:33:28.654 [2024-07-25 04:16:43.652622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.652648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 00:33:28.654 [2024-07-25 04:16:43.652817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.652860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 00:33:28.654 [2024-07-25 04:16:43.653049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.653078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 00:33:28.654 [2024-07-25 04:16:43.653222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.653257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 
00:33:28.654 [2024-07-25 04:16:43.653451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.653476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 00:33:28.654 [2024-07-25 04:16:43.653704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.653729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 00:33:28.654 [2024-07-25 04:16:43.653914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.653943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 00:33:28.654 [2024-07-25 04:16:43.654077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.654107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 00:33:28.654 [2024-07-25 04:16:43.654287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.654313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 
00:33:28.654 [2024-07-25 04:16:43.654440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.654466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 00:33:28.654 [2024-07-25 04:16:43.654628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.654655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 00:33:28.654 [2024-07-25 04:16:43.654840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.654866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 00:33:28.654 [2024-07-25 04:16:43.654983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.655009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 00:33:28.654 [2024-07-25 04:16:43.655126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.654 [2024-07-25 04:16:43.655152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.654 qpair failed and we were unable to recover it. 
00:33:28.654 [2024-07-25 04:16:43.655310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.654 [2024-07-25 04:16:43.655337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.654 qpair failed and we were unable to recover it.
00:33:28.654 [2024-07-25 04:16:43.655508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.654 [2024-07-25 04:16:43.655537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.654 qpair failed and we were unable to recover it.
00:33:28.654 [2024-07-25 04:16:43.655711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.654 [2024-07-25 04:16:43.655737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.654 qpair failed and we were unable to recover it.
00:33:28.654 [2024-07-25 04:16:43.655908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.654 [2024-07-25 04:16:43.655938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.654 qpair failed and we were unable to recover it.
00:33:28.654 [2024-07-25 04:16:43.656111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.654 [2024-07-25 04:16:43.656141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.654 qpair failed and we were unable to recover it.
00:33:28.654 [2024-07-25 04:16:43.656300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.654 [2024-07-25 04:16:43.656330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.654 qpair failed and we were unable to recover it.
00:33:28.654 [2024-07-25 04:16:43.656507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.654 [2024-07-25 04:16:43.656533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.654 qpair failed and we were unable to recover it.
00:33:28.654 [2024-07-25 04:16:43.656673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.654 [2024-07-25 04:16:43.656716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.654 qpair failed and we were unable to recover it.
00:33:28.654 [2024-07-25 04:16:43.656859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.654 [2024-07-25 04:16:43.656888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.654 qpair failed and we were unable to recover it.
00:33:28.654 [2024-07-25 04:16:43.657033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.654 [2024-07-25 04:16:43.657076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.654 qpair failed and we were unable to recover it.
00:33:28.654 [2024-07-25 04:16:43.657222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.654 [2024-07-25 04:16:43.657252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.654 qpair failed and we were unable to recover it.
00:33:28.654 [2024-07-25 04:16:43.657404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.654 [2024-07-25 04:16:43.657431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.654 qpair failed and we were unable to recover it.
00:33:28.654 [2024-07-25 04:16:43.657616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.654 [2024-07-25 04:16:43.657642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.654 qpair failed and we were unable to recover it.
00:33:28.654 [2024-07-25 04:16:43.657764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.654 [2024-07-25 04:16:43.657790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.654 qpair failed and we were unable to recover it.
00:33:28.654 [2024-07-25 04:16:43.657928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.657953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.658083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.658109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.658282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.658311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.658487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.658512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.658619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.658645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.658851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.658877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.659026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.659052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.659197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.659223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.659390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.659417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.659591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.659620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.659818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.659883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.660050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.660077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.660250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.660276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.660414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.660440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.660594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.660635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.660792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.660821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.660973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.661010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.661162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.661188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.661357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.661382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.661493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.661535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.661710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.661735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.661863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.661888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.662050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.662077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.662240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.662277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.662483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.662509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.662674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.662703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.662867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.662899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.663034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.663062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.663226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.663257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.663428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.663458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.655 [2024-07-25 04:16:43.663631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.655 [2024-07-25 04:16:43.663668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.655 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.663887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.663937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.664115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.664141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.664272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.664317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.664481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.664521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.664713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.664742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.664918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.664944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.665107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.665136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.665291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.665320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.665474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.665503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.665682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.665708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.665885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.665910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.666056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.666083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.666253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.666291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.666458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.666484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.666652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.666681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.666880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.666906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.667027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.667053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.667235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.667266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.667437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.667467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.667665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.667714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.667918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.667944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.668060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.668087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.668229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.668261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.668449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.668478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.668636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.668664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.668835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.668861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.669011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.669036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.656 [2024-07-25 04:16:43.669198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.656 [2024-07-25 04:16:43.669227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.656 qpair failed and we were unable to recover it.
00:33:28.657 [2024-07-25 04:16:43.669403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.657 [2024-07-25 04:16:43.669441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.657 qpair failed and we were unable to recover it.
00:33:28.657 [2024-07-25 04:16:43.669603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.657 [2024-07-25 04:16:43.669632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.657 qpair failed and we were unable to recover it.
00:33:28.657 [2024-07-25 04:16:43.669788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.657 [2024-07-25 04:16:43.669815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.657 qpair failed and we were unable to recover it.
00:33:28.657 [2024-07-25 04:16:43.669937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.657 [2024-07-25 04:16:43.669963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.657 qpair failed and we were unable to recover it.
00:33:28.657 [2024-07-25 04:16:43.670111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.657 [2024-07-25 04:16:43.670137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.657 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.670297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.670325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.670444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.670471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.670624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.670650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.670841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.670891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.671073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.671106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.671236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.671281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.671430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.671472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.671661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.671708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.671878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.671904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.672036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.672062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.672228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.672264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.672431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.672459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.672635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.672661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.672810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.672836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.672986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.673012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.673162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.673205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.673410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.673436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.673599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.673625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.673771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.658 [2024-07-25 04:16:43.673822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.658 qpair failed and we were unable to recover it.
00:33:28.658 [2024-07-25 04:16:43.674046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.674095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.674288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.674315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.674464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.674490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.674644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.674672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.674814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.674843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.675018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.675044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.675197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.675222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.675416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.675460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.675669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.675704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.675845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.675872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.676027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.676055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.676232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.676290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.676441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.676468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.676638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.676664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.676815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.676842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.676971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.677010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.677159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.677186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.677314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.659 [2024-07-25 04:16:43.677340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.659 qpair failed and we were unable to recover it.
00:33:28.659 [2024-07-25 04:16:43.677490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.659 [2024-07-25 04:16:43.677526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.659 qpair failed and we were unable to recover it. 00:33:28.659 [2024-07-25 04:16:43.677696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.659 [2024-07-25 04:16:43.677726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.659 qpair failed and we were unable to recover it. 00:33:28.659 [2024-07-25 04:16:43.677927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.659 [2024-07-25 04:16:43.677979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.659 qpair failed and we were unable to recover it. 00:33:28.659 [2024-07-25 04:16:43.678144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.659 [2024-07-25 04:16:43.678170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.659 qpair failed and we were unable to recover it. 00:33:28.659 [2024-07-25 04:16:43.678333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.659 [2024-07-25 04:16:43.678361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.659 qpair failed and we were unable to recover it. 
00:33:28.659 [2024-07-25 04:16:43.678533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.659 [2024-07-25 04:16:43.678563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.659 qpair failed and we were unable to recover it. 00:33:28.659 [2024-07-25 04:16:43.678731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.659 [2024-07-25 04:16:43.678758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.659 qpair failed and we were unable to recover it. 00:33:28.659 [2024-07-25 04:16:43.678903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.659 [2024-07-25 04:16:43.678931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.659 qpair failed and we were unable to recover it. 00:33:28.659 [2024-07-25 04:16:43.679108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.659 [2024-07-25 04:16:43.679135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.659 qpair failed and we were unable to recover it. 00:33:28.659 [2024-07-25 04:16:43.679262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.659 [2024-07-25 04:16:43.679290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.659 qpair failed and we were unable to recover it. 
00:33:28.659 [2024-07-25 04:16:43.679461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.679488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.679609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.679637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.679812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.679848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.679958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.679984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.680150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.680176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 
00:33:28.660 [2024-07-25 04:16:43.680296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.680324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.680451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.680478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.680683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.680713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.680910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.680936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.681111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.681138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 
00:33:28.660 [2024-07-25 04:16:43.681261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.681298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.681445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.681471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.681601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.681628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.681752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.681779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.681960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.681987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 
00:33:28.660 [2024-07-25 04:16:43.682121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.682151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.682287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.682317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.682482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.682508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.682641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.682668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.682817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.682844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 
00:33:28.660 [2024-07-25 04:16:43.683000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.683030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.683188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.683215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.683364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.683391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.683558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.683592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.683767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.683818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 
00:33:28.660 [2024-07-25 04:16:43.683992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.684020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.684176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.684203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.660 qpair failed and we were unable to recover it. 00:33:28.660 [2024-07-25 04:16:43.684371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.660 [2024-07-25 04:16:43.684399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.684547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.684573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.684701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.684727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 
00:33:28.661 [2024-07-25 04:16:43.684847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.684872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.685014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.685041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.685193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.685220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.685356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.685383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.685512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.685539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 
00:33:28.661 [2024-07-25 04:16:43.685689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.685716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.685859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.685901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.686077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.686103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.686255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.686282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.686435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.686462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 
00:33:28.661 [2024-07-25 04:16:43.686643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.686700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.686842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.686868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.687021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.687048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.687202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.687232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.687380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.687407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 
00:33:28.661 [2024-07-25 04:16:43.687538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.687564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.687713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.687740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.687868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.687895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.688014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.688041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.688192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.688218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 
00:33:28.661 [2024-07-25 04:16:43.688390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.688418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.688545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.688589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.688764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.661 [2024-07-25 04:16:43.688807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.661 qpair failed and we were unable to recover it. 00:33:28.661 [2024-07-25 04:16:43.688952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.688980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.689129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.689156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 
00:33:28.662 [2024-07-25 04:16:43.689315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.689342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.689517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.689544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.689660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.689688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.689847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.689874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.690062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.690092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 
00:33:28.662 [2024-07-25 04:16:43.690290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.690318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.690460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.690487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.690609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.690637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.690794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.690825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.690983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.691013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 
00:33:28.662 [2024-07-25 04:16:43.691144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.691171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.691300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.691328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.691463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.691502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.691682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.691712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.691860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.691887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 
00:33:28.662 [2024-07-25 04:16:43.692006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.692032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.692150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.692176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.692349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.692376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.692506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.692532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.692678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.692704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 
00:33:28.662 [2024-07-25 04:16:43.692841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.692867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.693011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.693046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.693205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.693233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.693371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.693397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 00:33:28.662 [2024-07-25 04:16:43.693510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.662 [2024-07-25 04:16:43.693535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.662 qpair failed and we were unable to recover it. 
00:33:28.663 [2024-07-25 04:16:43.693685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.693710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.693831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.693856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.693974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.693999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.694130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.694154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.694306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.694332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 
00:33:28.663 [2024-07-25 04:16:43.694463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.694488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.694631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.694655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.694822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.694866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.695084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.695119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.695246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.695274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 
00:33:28.663 [2024-07-25 04:16:43.695411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.695441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.696594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.696624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.696885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.696912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.697732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.697777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.697934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.697961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 
00:33:28.663 [2024-07-25 04:16:43.698081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.698107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.698303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.698329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.698451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.698478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.698600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.698634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.698782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.698808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 
00:33:28.663 [2024-07-25 04:16:43.699008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.699035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.699180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.699206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.699345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.699372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.699523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.699548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.663 [2024-07-25 04:16:43.699729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.699775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 
00:33:28.663 [2024-07-25 04:16:43.699920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.663 [2024-07-25 04:16:43.699949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.663 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.700111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.700137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.700268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.700301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.700421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.700447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.700566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.700592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 
00:33:28.664 [2024-07-25 04:16:43.700744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.700769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.700894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.700921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.701049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.701075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.701226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.701262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.701404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.701432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 
00:33:28.664 [2024-07-25 04:16:43.701561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.701585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.701711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.701754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.701937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.701966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.702135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.702160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.702292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.702318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 
00:33:28.664 [2024-07-25 04:16:43.702438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.702464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.702613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.702638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.702783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.702808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.702952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.702977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.703117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.703142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 
00:33:28.664 [2024-07-25 04:16:43.703266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.703297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.664 [2024-07-25 04:16:43.703423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.664 [2024-07-25 04:16:43.703449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.664 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.703577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.703603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.704315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.704344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.704493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.704520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 
00:33:28.665 [2024-07-25 04:16:43.704659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.704692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.704839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.704868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.704999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.705025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.705198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.705225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.705357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.705383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 
00:33:28.665 [2024-07-25 04:16:43.705514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.705541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.705686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.705711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.705854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.705880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.706028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.706053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.706226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.706256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 
00:33:28.665 [2024-07-25 04:16:43.706377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.706402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.706551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.706594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.706721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.706746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.706877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.706902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.707048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.707074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 
00:33:28.665 [2024-07-25 04:16:43.707225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.707257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.707380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.707406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.707532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.707558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.707686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.707712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.707858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.707884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 
00:33:28.665 [2024-07-25 04:16:43.708005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.708029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.708156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.708182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.708308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.665 [2024-07-25 04:16:43.708335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.665 qpair failed and we were unable to recover it. 00:33:28.665 [2024-07-25 04:16:43.708463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.666 [2024-07-25 04:16:43.708489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.666 qpair failed and we were unable to recover it. 00:33:28.666 [2024-07-25 04:16:43.708643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.666 [2024-07-25 04:16:43.708668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.666 qpair failed and we were unable to recover it. 
00:33:28.666 [2024-07-25 04:16:43.708815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.666 [2024-07-25 04:16:43.708840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.666 qpair failed and we were unable to recover it. 00:33:28.666 [2024-07-25 04:16:43.708965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.666 [2024-07-25 04:16:43.708990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.666 qpair failed and we were unable to recover it. 00:33:28.666 [2024-07-25 04:16:43.709132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.666 [2024-07-25 04:16:43.709157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.666 qpair failed and we were unable to recover it. 00:33:28.666 [2024-07-25 04:16:43.709292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.666 [2024-07-25 04:16:43.709322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.666 qpair failed and we were unable to recover it. 00:33:28.666 [2024-07-25 04:16:43.709472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.666 [2024-07-25 04:16:43.709498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.666 qpair failed and we were unable to recover it. 
00:33:28.666 [2024-07-25 04:16:43.709656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.666 [2024-07-25 04:16:43.709699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.666 qpair failed and we were unable to recover it.
[... the same three-record sequence (connect() failed, errno = 111; sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats from 04:16:43.709857 through 04:16:43.715514 ...]
00:33:28.667 [2024-07-25 04:16:43.715800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.667 [2024-07-25 04:16:43.715840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.667 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f5410000b90 through 04:16:43.717229 ...]
00:33:28.668 [2024-07-25 04:16:43.717371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60a470 is same with the state(5) to be set
00:33:28.668 [2024-07-25 04:16:43.717524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.668 [2024-07-25 04:16:43.717569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.668 qpair failed and we were unable to recover it.
00:33:28.668 [2024-07-25 04:16:43.718491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.668 [2024-07-25 04:16:43.718530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.668 qpair failed and we were unable to recover it.
[... the connect() failed, errno = 111 sequence continues, interleaved across tqpairs 0x5fc4b0, 0x7f5410000b90, 0x7f5400000b90, and 0x7f5408000b90, through 04:16:43.729119; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:28.670 [2024-07-25 04:16:43.729275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.670 [2024-07-25 04:16:43.729309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.670 qpair failed and we were unable to recover it. 00:33:28.670 [2024-07-25 04:16:43.729435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.670 [2024-07-25 04:16:43.729461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.670 qpair failed and we were unable to recover it. 00:33:28.670 [2024-07-25 04:16:43.729606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.670 [2024-07-25 04:16:43.729631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.670 qpair failed and we were unable to recover it. 00:33:28.670 [2024-07-25 04:16:43.729776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.670 [2024-07-25 04:16:43.729801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.670 qpair failed and we were unable to recover it. 00:33:28.670 [2024-07-25 04:16:43.729914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.670 [2024-07-25 04:16:43.729940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.670 qpair failed and we were unable to recover it. 
00:33:28.670 [2024-07-25 04:16:43.730082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.670 [2024-07-25 04:16:43.730107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.670 qpair failed and we were unable to recover it. 00:33:28.670 [2024-07-25 04:16:43.730240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.670 [2024-07-25 04:16:43.730283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.670 qpair failed and we were unable to recover it. 00:33:28.670 [2024-07-25 04:16:43.730407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.670 [2024-07-25 04:16:43.730433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.670 qpair failed and we were unable to recover it. 00:33:28.670 [2024-07-25 04:16:43.730560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.670 [2024-07-25 04:16:43.730596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.670 qpair failed and we were unable to recover it. 00:33:28.670 [2024-07-25 04:16:43.730708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.730734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 
00:33:28.671 [2024-07-25 04:16:43.730861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.730886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.731028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.731053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.731167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.731192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.731322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.731348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.731467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.731493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 
00:33:28.671 [2024-07-25 04:16:43.731619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.731644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.731795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.731821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.731937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.731963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.732084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.732113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.732265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.732291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 
00:33:28.671 [2024-07-25 04:16:43.732417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.732442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.732557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.732583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.732732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.732763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.732882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.732907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.733032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.733058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 
00:33:28.671 [2024-07-25 04:16:43.733179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.733204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.733348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.733375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.733495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.733521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.733651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.733687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 00:33:28.671 [2024-07-25 04:16:43.733809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.671 [2024-07-25 04:16:43.733834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.671 qpair failed and we were unable to recover it. 
00:33:28.671 [2024-07-25 04:16:43.733954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.733980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.734107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.734132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.734297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.734325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.734448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.734476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.734623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.734648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 
00:33:28.672 [2024-07-25 04:16:43.734791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.734817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.734937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.734964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.735137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.735162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.735289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.735316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.735431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.735457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 
00:33:28.672 [2024-07-25 04:16:43.735611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.735637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.735816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.735845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.736011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.736036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.736168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.736194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.736312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.736338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 
00:33:28.672 [2024-07-25 04:16:43.736459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.736484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.736610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.736636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.736753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.736779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.736904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.736930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.737039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.737064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 
00:33:28.672 [2024-07-25 04:16:43.737178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.737204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.737356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.737383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.737506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.737532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.737686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.737712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.737834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.737860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 
00:33:28.672 [2024-07-25 04:16:43.737976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.672 [2024-07-25 04:16:43.738002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.672 qpair failed and we were unable to recover it. 00:33:28.672 [2024-07-25 04:16:43.738132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.738158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.738297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.738323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.738440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.738466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.738590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.738620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 
00:33:28.673 [2024-07-25 04:16:43.738769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.738795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.738941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.738966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.739164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.739206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.739360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.739386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.739500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.739526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 
00:33:28.673 [2024-07-25 04:16:43.739695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.739721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.739855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.739884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.740044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.740072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.740256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.740282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.740405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.740431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 
00:33:28.673 [2024-07-25 04:16:43.740610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.740639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.740807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.740849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.740998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.741026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.741195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.741224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.741382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.741408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 
00:33:28.673 [2024-07-25 04:16:43.741539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.741565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.741731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.741760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.741920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.741948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.742113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.742139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 00:33:28.673 [2024-07-25 04:16:43.742283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.673 [2024-07-25 04:16:43.742320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.673 qpair failed and we were unable to recover it. 
00:33:28.677 [2024-07-25 04:16:43.762198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.762225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.762376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.762401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.762523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.762548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.762702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.762727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.762904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.762933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 
00:33:28.677 [2024-07-25 04:16:43.763171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.763202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.763327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.763352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.763501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.763538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.763655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.763681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.763835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.763861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 
00:33:28.677 [2024-07-25 04:16:43.764005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.764030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.764186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.764211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.764369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.764395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.764516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.764542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.764715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.764756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 
00:33:28.677 [2024-07-25 04:16:43.765592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.765625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.765809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.765839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.766092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.766122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.766326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.766353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.766513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.766555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 
00:33:28.677 [2024-07-25 04:16:43.766729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.766755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.766899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.766924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.767092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.767122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.677 [2024-07-25 04:16:43.767317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.677 [2024-07-25 04:16:43.767355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.677 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.767479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.767505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 
00:33:28.678 [2024-07-25 04:16:43.767630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.767656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.767826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.767854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.767990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.768019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.768154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.768180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.768314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.768341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 
00:33:28.678 [2024-07-25 04:16:43.768470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.768496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.768619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.768663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.768796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.768829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.769058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.769086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.769225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.769263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 
00:33:28.678 [2024-07-25 04:16:43.769456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.769483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.769645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.769672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.769797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.769840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.769978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.770007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.770199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.770228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 
00:33:28.678 [2024-07-25 04:16:43.770387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.770413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.770530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.770556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.770716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.770742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.770868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.770894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.771038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.771066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 
00:33:28.678 [2024-07-25 04:16:43.771209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.771234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.771394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.771434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.771592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.771620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.771778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.771805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.771990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.772035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 
00:33:28.678 [2024-07-25 04:16:43.772207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.772236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.772393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.772420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.772568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.772602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.772786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.772815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.772982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.773011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 
00:33:28.678 [2024-07-25 04:16:43.773181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.773207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.773337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.773364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.773488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.773514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.773669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.773695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.773866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.773894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 
00:33:28.678 [2024-07-25 04:16:43.774065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.774094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.678 [2024-07-25 04:16:43.774299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.678 [2024-07-25 04:16:43.774339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.678 qpair failed and we were unable to recover it. 00:33:28.679 [2024-07-25 04:16:43.774459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.679 [2024-07-25 04:16:43.774486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.679 qpair failed and we were unable to recover it. 00:33:28.679 [2024-07-25 04:16:43.774613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.679 [2024-07-25 04:16:43.774641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.679 qpair failed and we were unable to recover it. 00:33:28.679 [2024-07-25 04:16:43.774809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.679 [2024-07-25 04:16:43.774856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.679 qpair failed and we were unable to recover it. 
00:33:28.679 [2024-07-25 04:16:43.774988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.679 [2024-07-25 04:16:43.775017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.679 qpair failed and we were unable to recover it. 00:33:28.679 [2024-07-25 04:16:43.775153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.679 [2024-07-25 04:16:43.775182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.679 qpair failed and we were unable to recover it. 00:33:28.679 [2024-07-25 04:16:43.775376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.679 [2024-07-25 04:16:43.775416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.679 qpair failed and we were unable to recover it. 00:33:28.679 [2024-07-25 04:16:43.775583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.679 [2024-07-25 04:16:43.775610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.679 qpair failed and we were unable to recover it. 00:33:28.679 [2024-07-25 04:16:43.775775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.679 [2024-07-25 04:16:43.775802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.679 qpair failed and we were unable to recover it. 
00:33:28.679 [2024-07-25 04:16:43.775949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.679 [2024-07-25 04:16:43.775975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.679 qpair failed and we were unable to recover it. 00:33:28.679 [2024-07-25 04:16:43.776136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.679 [2024-07-25 04:16:43.776162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.679 qpair failed and we were unable to recover it. 00:33:28.679 [2024-07-25 04:16:43.776303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.679 [2024-07-25 04:16:43.776330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.679 qpair failed and we were unable to recover it. 00:33:28.679 [2024-07-25 04:16:43.776454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.679 [2024-07-25 04:16:43.776480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.679 qpair failed and we were unable to recover it. 00:33:28.679 [2024-07-25 04:16:43.776639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.679 [2024-07-25 04:16:43.776680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:28.679 qpair failed and we were unable to recover it. 
00:33:28.679 [2024-07-25 04:16:43.776844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.776872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.777098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.777127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.777298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.777325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.777454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.777481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.777614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.777640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.777771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.777797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.777910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.777935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.778079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.778104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.778274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.778300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.778431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.778456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.778575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.778601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.778749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.778775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.778922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.778965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.779119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.779148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.779315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.779341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.779469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.779494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.779651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.779677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.779835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.779864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.780039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.780068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.780210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.780236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.780381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.780407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.780541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.780567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.780703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.780729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.780865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.780891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.679 [2024-07-25 04:16:43.781038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.679 [2024-07-25 04:16:43.781081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.679 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.781260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.781310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.781433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.781459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.781632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.781660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.781803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.781845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.781975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.782004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.782141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.782169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.782323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.782350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.782463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.782489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.782621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.782650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.782800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.782829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.782993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.783021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.783178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.783207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.783373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.783399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.783516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.783560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.783707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.783737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.783911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.783939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.784073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.784101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.784273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.784300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.784431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.784456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.784573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.784599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.784749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.784775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.784979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.785008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.785148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.785178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.785329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.785355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.785498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.785537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.785664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.785692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.785881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.785908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.786075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.786123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.786281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.786308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.786450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.786495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.786671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.786719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.786867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.786913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.787090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.787116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.787237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.787270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.787415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.787459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.787635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.787683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.787879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.787905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.788058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.680 [2024-07-25 04:16:43.788084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.680 qpair failed and we were unable to recover it.
00:33:28.680 [2024-07-25 04:16:43.788233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.788265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.788396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.788422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.788594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.788638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.788836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.788863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.789011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.789037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.789211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.789238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.789397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.789441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.789613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.789656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.789830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.789873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.790019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.790045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.790198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.790225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.790400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.790444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.790603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.790634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.790780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.790809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.791013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.791046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.791234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.791276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.791444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.791486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.791660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.791711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.791872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.791901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.792038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.792067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.792268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.792296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.792444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.792470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.792617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.792661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.792890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.792936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.793083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.793129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.793306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.793333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.793482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.793511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.793684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.793712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.793892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.793920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.794073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.794101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.794290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.794317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.794432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.794458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.681 [2024-07-25 04:16:43.794625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.681 [2024-07-25 04:16:43.794653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.681 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.794810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.794858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.795018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.795065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.795222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.795254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.795387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.795413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.795536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.795561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.795750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.795795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.795934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.795963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.796131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.796159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.796327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.796366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.796488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.796515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.796689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.796733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.796935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.796977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.797129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.797156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.797306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.797332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.797477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.797523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.797657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.797702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.797883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.797927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.798096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.798122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.798312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.682 [2024-07-25 04:16:43.798357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:28.682 qpair failed and we were unable to recover it.
00:33:28.682 [2024-07-25 04:16:43.798477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.798503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.798654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.798680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.798824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.798851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.798998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.799024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.799152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.799182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 
00:33:28.682 [2024-07-25 04:16:43.799344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.799371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.799489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.799520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.799693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.799719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.799853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.799896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.800045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.800071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 
00:33:28.682 [2024-07-25 04:16:43.800249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.800276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.800422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.800466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.800642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.800669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.800814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.800844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.801019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.801064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 
00:33:28.682 [2024-07-25 04:16:43.801236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.801270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.801412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.801457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.801630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.682 [2024-07-25 04:16:43.801674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.682 qpair failed and we were unable to recover it. 00:33:28.682 [2024-07-25 04:16:43.801843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.801872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.802034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.802061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 
00:33:28.683 [2024-07-25 04:16:43.802180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.802208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.802401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.802445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.802616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.802647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.802806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.802835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.802979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.803008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 
00:33:28.683 [2024-07-25 04:16:43.803176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.803203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.803336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.803362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.803513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.803539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.803680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.803709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.803896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.803924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 
00:33:28.683 [2024-07-25 04:16:43.804087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.804117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.804288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.804318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.804499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.804540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.804769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.804798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.805040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.805087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 
00:33:28.683 [2024-07-25 04:16:43.805248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.805292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.805444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.805470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.805639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.805667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.805830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.805860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.805999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.806043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 
00:33:28.683 [2024-07-25 04:16:43.806200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.806229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.806385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.806413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.806540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.806567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.806715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.806741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 00:33:28.683 [2024-07-25 04:16:43.806918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.683 [2024-07-25 04:16:43.806951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.683 qpair failed and we were unable to recover it. 
00:33:28.683 [2024-07-25 04:16:43.807145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.807173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.807342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.807369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.807495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.807522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.807734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.807768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.807976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.808005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 
00:33:28.684 [2024-07-25 04:16:43.808168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.808197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.808344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.808372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.808549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.808578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.808708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.808737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.808924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.808953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 
00:33:28.684 [2024-07-25 04:16:43.809081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.809110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.809248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.809292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.809467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.809493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.809633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.809663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.809826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.809855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 
00:33:28.684 [2024-07-25 04:16:43.810043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.810072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.810228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.810263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.810447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.810473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.810648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.810674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.810824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.810850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 
00:33:28.684 [2024-07-25 04:16:43.810972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.811000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.811122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.811148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.811317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.811346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.811510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.811539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.811706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.811732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 
00:33:28.684 [2024-07-25 04:16:43.811890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.684 [2024-07-25 04:16:43.811920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.684 qpair failed and we were unable to recover it. 00:33:28.684 [2024-07-25 04:16:43.812122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.685 [2024-07-25 04:16:43.812151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.685 qpair failed and we were unable to recover it. 00:33:28.685 [2024-07-25 04:16:43.812297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.685 [2024-07-25 04:16:43.812323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.685 qpair failed and we were unable to recover it. 00:33:28.685 [2024-07-25 04:16:43.812495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.685 [2024-07-25 04:16:43.812542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.685 qpair failed and we were unable to recover it. 00:33:28.685 [2024-07-25 04:16:43.812667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.685 [2024-07-25 04:16:43.812696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.685 qpair failed and we were unable to recover it. 
00:33:28.685 [2024-07-25 04:16:43.812835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.685 [2024-07-25 04:16:43.812861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.685 qpair failed and we were unable to recover it. 00:33:28.685 [2024-07-25 04:16:43.813037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.685 [2024-07-25 04:16:43.813063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.685 qpair failed and we were unable to recover it. 00:33:28.685 [2024-07-25 04:16:43.813238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.685 [2024-07-25 04:16:43.813273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.685 qpair failed and we were unable to recover it. 00:33:28.685 [2024-07-25 04:16:43.813397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.685 [2024-07-25 04:16:43.813424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.685 qpair failed and we were unable to recover it. 00:33:28.685 [2024-07-25 04:16:43.813541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.685 [2024-07-25 04:16:43.813568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.685 qpair failed and we were unable to recover it. 
00:33:28.688 [2024-07-25 04:16:43.834412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.688 [2024-07-25 04:16:43.834441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.688 qpair failed and we were unable to recover it. 00:33:28.688 [2024-07-25 04:16:43.834612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.688 [2024-07-25 04:16:43.834646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.688 qpair failed and we were unable to recover it. 00:33:28.688 [2024-07-25 04:16:43.834807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.688 [2024-07-25 04:16:43.834833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.688 qpair failed and we were unable to recover it. 00:33:28.688 [2024-07-25 04:16:43.834977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.688 [2024-07-25 04:16:43.835003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.688 qpair failed and we were unable to recover it. 00:33:28.688 [2024-07-25 04:16:43.835184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.688 [2024-07-25 04:16:43.835212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.688 qpair failed and we were unable to recover it. 
00:33:28.688 [2024-07-25 04:16:43.835424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.688 [2024-07-25 04:16:43.835451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.688 qpair failed and we were unable to recover it. 00:33:28.688 [2024-07-25 04:16:43.835618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.688 [2024-07-25 04:16:43.835648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.688 qpair failed and we were unable to recover it. 00:33:28.688 [2024-07-25 04:16:43.835788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.688 [2024-07-25 04:16:43.835817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.688 qpair failed and we were unable to recover it. 00:33:28.688 [2024-07-25 04:16:43.836008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.688 [2024-07-25 04:16:43.836034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.688 qpair failed and we were unable to recover it. 00:33:28.688 [2024-07-25 04:16:43.836195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.688 [2024-07-25 04:16:43.836224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.688 qpair failed and we were unable to recover it. 
00:33:28.688 [2024-07-25 04:16:43.836368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.688 [2024-07-25 04:16:43.836398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.688 qpair failed and we were unable to recover it. 00:33:28.688 [2024-07-25 04:16:43.836563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.688 [2024-07-25 04:16:43.836589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.836735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.836762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.836940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.836966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.837126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.837166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 
00:33:28.689 [2024-07-25 04:16:43.837343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.837371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.837493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.837519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.837668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.837695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.837887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.837917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.838083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.838112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 
00:33:28.689 [2024-07-25 04:16:43.838318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.838345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.838507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.838536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.838699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.838728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.838876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.838902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.839023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.839049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 
00:33:28.689 [2024-07-25 04:16:43.839227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.839262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.839408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.839434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.839625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.839654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.839849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.839877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.840046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.840072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 
00:33:28.689 [2024-07-25 04:16:43.840251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.840277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.840446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.840471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.840589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.840614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.840762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.840804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.840978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.841004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 
00:33:28.689 [2024-07-25 04:16:43.841149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.841175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.841288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.841330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.841490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.841518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.841656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.841681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.841825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.841851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 
00:33:28.689 [2024-07-25 04:16:43.842036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.842062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.842209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.842239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.842393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.842420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.842538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.842564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.842705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.842731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 
00:33:28.689 [2024-07-25 04:16:43.842899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.842929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.843119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.843148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.843290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.843316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.689 qpair failed and we were unable to recover it. 00:33:28.689 [2024-07-25 04:16:43.843464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.689 [2024-07-25 04:16:43.843490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.843616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.843642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 
00:33:28.690 [2024-07-25 04:16:43.843833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.843859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.844032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.844061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.844192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.844222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.844404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.844430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.844624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.844653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 
00:33:28.690 [2024-07-25 04:16:43.844835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.844861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.845011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.845038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.845186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.845216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.845382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.845411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.845574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.845600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 
00:33:28.690 [2024-07-25 04:16:43.845751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.845777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.845928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.845954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.846067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.846093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.846210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.846236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.846370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.846395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 
00:33:28.690 [2024-07-25 04:16:43.846584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.846610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.846779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.846808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.846980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.847005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.847132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.847158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.847308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.847334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 
00:33:28.690 [2024-07-25 04:16:43.847477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.847503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.847649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.847674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.847865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.847894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.848053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.848082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 00:33:28.690 [2024-07-25 04:16:43.848254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.690 [2024-07-25 04:16:43.848281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.690 qpair failed and we were unable to recover it. 
00:33:28.690 [2024-07-25 04:16:43.848412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.690 [2024-07-25 04:16:43.848438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.690 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet (posix.c:1023 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously through timestamp 04:16:43.869820, only the timestamps varying ...]
00:33:28.693 [2024-07-25 04:16:43.869949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.869978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.870132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.870162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.870304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.870331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.870487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.870513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.870702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.870728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 
00:33:28.693 [2024-07-25 04:16:43.870884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.870927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.871125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.871151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.871269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.871296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.871438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.871464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.871641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.871670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 
00:33:28.693 [2024-07-25 04:16:43.871831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.871858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.871976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.872021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.872185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.872215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.872419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.872446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.872591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.872617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 
00:33:28.693 [2024-07-25 04:16:43.872761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.872789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.872952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.872978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.873180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.873209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.873349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.873378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 00:33:28.693 [2024-07-25 04:16:43.873547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.693 [2024-07-25 04:16:43.873573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.693 qpair failed and we were unable to recover it. 
00:33:28.693 [2024-07-25 04:16:43.873710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.873735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.873844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.873870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.874042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.874068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.874262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.874291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.874450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.874479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 
00:33:28.694 [2024-07-25 04:16:43.874655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.874681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.874829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.874855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.875042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.875068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.875201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.875227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.875407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.875436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 
00:33:28.694 [2024-07-25 04:16:43.875569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.875598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.875770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.875795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.875944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.875987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.876160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.876189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.876364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.876390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 
00:33:28.694 [2024-07-25 04:16:43.876504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.876546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.876681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.876710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.876900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.876927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.877107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.877136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.877326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.877355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 
00:33:28.694 [2024-07-25 04:16:43.877523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.877549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.877718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.877747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.877905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.877934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.878100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.878129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.878301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.878328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 
00:33:28.694 [2024-07-25 04:16:43.878498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.878540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.878683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.878709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.878860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.878886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.879108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.879134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.879261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.879288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 
00:33:28.694 [2024-07-25 04:16:43.879452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.879481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.879642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.879676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.879822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.879848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.880039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.880068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.880254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.880281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 
00:33:28.694 [2024-07-25 04:16:43.880434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.880460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.880627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.880657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.880819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.880848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.881016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.881042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.881166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.881210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 
00:33:28.694 [2024-07-25 04:16:43.881372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.881402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.881555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.881582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.881729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.881775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.881912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.694 [2024-07-25 04:16:43.881940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.694 qpair failed and we were unable to recover it. 00:33:28.694 [2024-07-25 04:16:43.882113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.695 [2024-07-25 04:16:43.882138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.695 qpair failed and we were unable to recover it. 
00:33:28.695 [2024-07-25 04:16:43.882270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.695 [2024-07-25 04:16:43.882297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.695 qpair failed and we were unable to recover it. 00:33:28.695 [2024-07-25 04:16:43.882506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.695 [2024-07-25 04:16:43.882534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.695 qpair failed and we were unable to recover it. 00:33:28.695 [2024-07-25 04:16:43.882704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.695 [2024-07-25 04:16:43.882730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.695 qpair failed and we were unable to recover it. 00:33:28.695 [2024-07-25 04:16:43.882868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.695 [2024-07-25 04:16:43.882897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.695 qpair failed and we were unable to recover it. 00:33:28.695 [2024-07-25 04:16:43.883032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.695 [2024-07-25 04:16:43.883060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.695 qpair failed and we were unable to recover it. 
00:33:28.695 [2024-07-25 04:16:43.883259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.695 [2024-07-25 04:16:43.883286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.695 qpair failed and we were unable to recover it. 00:33:28.695 [2024-07-25 04:16:43.883408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.695 [2024-07-25 04:16:43.883434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.695 qpair failed and we were unable to recover it. 00:33:28.695 [2024-07-25 04:16:43.883584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.695 [2024-07-25 04:16:43.883610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.695 qpair failed and we were unable to recover it. 00:33:28.695 [2024-07-25 04:16:43.883766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.695 [2024-07-25 04:16:43.883792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.695 qpair failed and we were unable to recover it. 00:33:28.695 [2024-07-25 04:16:43.883932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.695 [2024-07-25 04:16:43.883958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.695 qpair failed and we were unable to recover it. 
00:33:28.695 [2024-07-25 04:16:43.884128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.695 [2024-07-25 04:16:43.884157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.695 qpair failed and we were unable to recover it.
[... identical error triplet (posix.c:1023 connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2383 sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it") repeated continuously for tqpair=0x7f5410000b90 and tqpair=0x7f5400000b90 from 04:16:43.884157 through 04:16:43.905175 — repeats trimmed ...]
00:33:28.697 [2024-07-25 04:16:43.905345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.697 [2024-07-25 04:16:43.905372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.697 qpair failed and we were unable to recover it. 00:33:28.697 [2024-07-25 04:16:43.905493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.697 [2024-07-25 04:16:43.905519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.697 qpair failed and we were unable to recover it. 00:33:28.697 [2024-07-25 04:16:43.905694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.905736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.905885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.905911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.906052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.906078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 
00:33:28.698 [2024-07-25 04:16:43.906196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.906221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.906376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.906403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.906521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.906547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.906760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.906789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.906939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.906965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 
00:33:28.698 [2024-07-25 04:16:43.907108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.907134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.907289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.907315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.907444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.907470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.907631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.907660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.907820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.907849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 
00:33:28.698 [2024-07-25 04:16:43.908008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.908034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.908144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.908170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.908348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.908379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.908545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.908572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.908696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.908722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 
00:33:28.698 [2024-07-25 04:16:43.908846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.908873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.908993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.909023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.909186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.909215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.909364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.909393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.909536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.909562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 
00:33:28.698 [2024-07-25 04:16:43.909703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.909728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.909865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.909895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.910093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.910119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.910288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.910318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.910472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.910501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 
00:33:28.698 [2024-07-25 04:16:43.910694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.910720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.910851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.910877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.911051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.911076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.911187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.911213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.911346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.911372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 
00:33:28.698 [2024-07-25 04:16:43.911583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.911613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.911808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.911834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.911963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.911991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.912170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.912196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.912346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.912373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 
00:33:28.698 [2024-07-25 04:16:43.912495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.912519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.912692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.912734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.912884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.912910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.913028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.913053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.913226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.913263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 
00:33:28.698 [2024-07-25 04:16:43.913434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.913461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.913634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.913663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.913827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.913856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.698 [2024-07-25 04:16:43.914006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.698 [2024-07-25 04:16:43.914036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.698 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.914212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.914237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 
00:33:28.699 [2024-07-25 04:16:43.914368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.914394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.914546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.914572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.914706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.914735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.914888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.914917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.915113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.915138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 
00:33:28.699 [2024-07-25 04:16:43.915323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.915352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.915489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.915518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.915664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.915689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.915803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.915830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.915983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.916009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 
00:33:28.699 [2024-07-25 04:16:43.916219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.916252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.916405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.916431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.916559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.916586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.916737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.916763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.916906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.916935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 
00:33:28.699 [2024-07-25 04:16:43.917066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.917096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.917268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.917295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.917464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.917494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.917657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.917687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.917857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.917884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 
00:33:28.699 [2024-07-25 04:16:43.918055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.918097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.918301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.918328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.918503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.918529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.918654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.918680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 00:33:28.699 [2024-07-25 04:16:43.918797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.699 [2024-07-25 04:16:43.918823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.699 qpair failed and we were unable to recover it. 
00:33:28.699 [2024-07-25 04:16:43.918981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.699 [2024-07-25 04:16:43.919007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.699 qpair failed and we were unable to recover it.
00:33:28.983 [2024-07-25 04:16:43.919123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.983 [2024-07-25 04:16:43.919167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.983 qpair failed and we were unable to recover it.
00:33:28.983 [2024-07-25 04:16:43.919361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.983 [2024-07-25 04:16:43.919391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.983 qpair failed and we were unable to recover it.
00:33:28.983 [2024-07-25 04:16:43.919535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.983 [2024-07-25 04:16:43.919561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.983 qpair failed and we were unable to recover it.
00:33:28.983 [2024-07-25 04:16:43.919770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.983 [2024-07-25 04:16:43.919800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.983 qpair failed and we were unable to recover it.
00:33:28.983 [2024-07-25 04:16:43.919961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.983 [2024-07-25 04:16:43.919990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.983 qpair failed and we were unable to recover it.
00:33:28.983 [2024-07-25 04:16:43.920138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.983 [2024-07-25 04:16:43.920164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.983 qpair failed and we were unable to recover it.
00:33:28.983 [2024-07-25 04:16:43.920308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.983 [2024-07-25 04:16:43.920335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.983 qpair failed and we were unable to recover it.
00:33:28.983 [2024-07-25 04:16:43.920519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.983 [2024-07-25 04:16:43.920545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.983 qpair failed and we were unable to recover it.
00:33:28.983 [2024-07-25 04:16:43.920664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.983 [2024-07-25 04:16:43.920690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.983 qpair failed and we were unable to recover it.
00:33:28.983 [2024-07-25 04:16:43.920835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.983 [2024-07-25 04:16:43.920878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.983 qpair failed and we were unable to recover it.
00:33:28.983 [2024-07-25 04:16:43.921008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.983 [2024-07-25 04:16:43.921037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.983 qpair failed and we were unable to recover it.
00:33:28.983 [2024-07-25 04:16:43.921194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.983 [2024-07-25 04:16:43.921223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.983 qpair failed and we were unable to recover it.
00:33:28.983 [2024-07-25 04:16:43.921398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.983 [2024-07-25 04:16:43.921428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.983 qpair failed and we were unable to recover it.
00:33:28.983 [2024-07-25 04:16:43.921577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.983 [2024-07-25 04:16:43.921605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.983 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.921787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.921813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.921941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.921968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.922137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.922166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.922370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.922397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.922522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.922553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.922702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.922732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.922883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.922910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.923037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.923064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.923185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.923218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.923413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.923439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.923563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.923605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.923809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.923836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.923958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.923984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.924127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.924170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.924324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.924352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.924474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.924500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.924678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.924704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.924852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.924878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.925025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.925052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.925175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.925202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.925332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.925359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.925475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.925502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.925680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.925705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.925826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.925862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.926008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.926035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.926158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.926184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.926306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.926332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.926455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.926483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.926670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.926696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.926842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.926868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.926992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.927019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.927162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.927188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.927338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.927365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.927493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.927521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.927664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.927708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.927840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.984 [2024-07-25 04:16:43.927870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.984 qpair failed and we were unable to recover it.
00:33:28.984 [2024-07-25 04:16:43.928046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.928072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.928253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.928305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.928442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.928472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.928600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.928626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.928770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.928796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.928910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.928936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.929057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.929083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.929207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.929233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.929380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.929406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.929534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.929560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.929677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.929701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.929824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.929850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.929977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.930003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.930148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.930175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.930335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.930362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.930489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.930515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.930668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.930695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.930876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.930905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.931080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.931106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.931261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.931298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.931416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.931442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.931601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.931627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.931743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.931768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.931941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.931969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.932144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.932170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.932306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.932333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.932456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.932482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.932616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.932643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.932763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.932790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.932949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.932976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.933120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.933147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.933301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.933328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.933444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.933470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.985 qpair failed and we were unable to recover it.
00:33:28.985 [2024-07-25 04:16:43.933603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.985 [2024-07-25 04:16:43.933629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.933781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.933825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.933987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.934028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.934219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.934252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.934371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.934397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.934580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.934609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.934778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.934804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.934947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.934974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.935118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.935147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.935325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.935356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.935476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.935503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.935670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.935700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.935891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.935917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.936090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.936119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.936253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.936283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.936420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.936447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.936603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.936629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.936781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.936808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.936956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.936983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.937120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.937150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.937319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.937345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.937461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.937487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.937647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.937688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.937908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.937934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.938104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.938130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.938285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.938312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.938435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.938462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.938613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.938640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.938836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.938865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.939004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.939032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.939200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.986 [2024-07-25 04:16:43.939227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.986 qpair failed and we were unable to recover it.
00:33:28.986 [2024-07-25 04:16:43.939368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.986 [2024-07-25 04:16:43.939395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.986 qpair failed and we were unable to recover it. 00:33:28.986 [2024-07-25 04:16:43.939514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.986 [2024-07-25 04:16:43.939556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.986 qpair failed and we were unable to recover it. 00:33:28.986 [2024-07-25 04:16:43.939728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.986 [2024-07-25 04:16:43.939754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.986 qpair failed and we were unable to recover it. 00:33:28.986 [2024-07-25 04:16:43.939947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.986 [2024-07-25 04:16:43.939976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.986 qpair failed and we were unable to recover it. 00:33:28.986 [2024-07-25 04:16:43.940146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.986 [2024-07-25 04:16:43.940175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 
00:33:28.987 [2024-07-25 04:16:43.940321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.940348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.940497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.940523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.940668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.940695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.940823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.940849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.940995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.941021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 
00:33:28.987 [2024-07-25 04:16:43.941201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.941227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.941398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.941424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.941545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.941571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.941702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.941728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.941876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.941902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 
00:33:28.987 [2024-07-25 04:16:43.942022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.942047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.942168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.942194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.942319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.942345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.942466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.942495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.942643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.942673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 
00:33:28.987 [2024-07-25 04:16:43.942835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.942861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.943035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.943078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.943260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.943287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.943405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.943432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.943603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.943646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 
00:33:28.987 [2024-07-25 04:16:43.943843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.943872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.944036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.944062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.944228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.944267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.944419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.944445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.944605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.944631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 
00:33:28.987 [2024-07-25 04:16:43.944754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.944781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.944933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.944958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.945132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.945158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.945309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.945336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.945481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.945507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 
00:33:28.987 [2024-07-25 04:16:43.945685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.945711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.945819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.945845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.945993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.946019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.946176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.946202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-07-25 04:16:43.946345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.987 [2024-07-25 04:16:43.946372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.987 qpair failed and we were unable to recover it. 
00:33:28.987 [2024-07-25 04:16:43.946496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.946522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.946647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.946672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.946791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.946817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.946986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.947015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.947157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.947182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 
00:33:28.988 [2024-07-25 04:16:43.947355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.947385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.947579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.947605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.947726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.947753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.947900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.947945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.948112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.948141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 
00:33:28.988 [2024-07-25 04:16:43.948284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.948310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.948439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.948465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.948629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.948658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.948858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.948884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.949027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.949053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 
00:33:28.988 [2024-07-25 04:16:43.949201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.949227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.949364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.949390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.949534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.949561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.949698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.949731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.949903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.949929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 
00:33:28.988 [2024-07-25 04:16:43.950048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.950074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.950271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.950298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.950443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.950469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.950593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.950619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.950737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.950763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 
00:33:28.988 [2024-07-25 04:16:43.950956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.950982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.951132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.951158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.951299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.988 [2024-07-25 04:16:43.951326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-07-25 04:16:43.951450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.989 [2024-07-25 04:16:43.951477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.989 qpair failed and we were unable to recover it. 00:33:28.989 [2024-07-25 04:16:43.951631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.989 [2024-07-25 04:16:43.951657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.989 qpair failed and we were unable to recover it. 
00:33:28.989 [2024-07-25 04:16:43.951808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.989 [2024-07-25 04:16:43.951834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.989 qpair failed and we were unable to recover it. 00:33:28.989 [2024-07-25 04:16:43.951979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.989 [2024-07-25 04:16:43.952005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.989 qpair failed and we were unable to recover it. 00:33:28.989 [2024-07-25 04:16:43.952138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.989 [2024-07-25 04:16:43.952180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.989 qpair failed and we were unable to recover it. 00:33:28.989 [2024-07-25 04:16:43.952354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.989 [2024-07-25 04:16:43.952381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.989 qpair failed and we were unable to recover it. 00:33:28.989 [2024-07-25 04:16:43.952504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.989 [2024-07-25 04:16:43.952530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.989 qpair failed and we were unable to recover it. 
00:33:28.989 [2024-07-25 04:16:43.952682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.989 [2024-07-25 04:16:43.952708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.989 qpair failed and we were unable to recover it.
[... the same three-part failure — posix.c:1023:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats for every subsequent reconnect attempt, with timestamps advancing from 04:16:43.952855 through 04:16:43.972653 ...]
00:33:28.992 [2024-07-25 04:16:43.972795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.972824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.972993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.973020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.973195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.973224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.973377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.973406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.973577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.973603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 
00:33:28.992 [2024-07-25 04:16:43.973723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.973749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.973891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.973917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.974071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.974098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.974253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.974300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.974461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.974490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 
00:33:28.992 [2024-07-25 04:16:43.974636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.974661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.974808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.974834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.975003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.975032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.975174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.975201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.975346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.975373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 
00:33:28.992 [2024-07-25 04:16:43.975512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.975539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.975700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.975726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.975872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.975898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.976066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.976095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 00:33:28.992 [2024-07-25 04:16:43.976259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-07-25 04:16:43.976313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.992 qpair failed and we were unable to recover it. 
00:33:28.992 [2024-07-25 04:16:43.976467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.976494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.976640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.976670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.976856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.976883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.977047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.977076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.977229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.977267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 
00:33:28.993 [2024-07-25 04:16:43.977408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.977436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.977569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.977595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.977742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.977768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.977914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.977944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.978062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.978089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 
00:33:28.993 [2024-07-25 04:16:43.978214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.978261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.978398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.978425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.978548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.978575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.978723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.978750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.978937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.978963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 
00:33:28.993 [2024-07-25 04:16:43.979134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.979160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.979298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.979328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.979498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.979526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.979693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.979723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.979860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.979889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 
00:33:28.993 [2024-07-25 04:16:43.980056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.980081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.980203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.980229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.980402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.980432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.980577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.980603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.980777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.980818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 
00:33:28.993 [2024-07-25 04:16:43.980986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.981013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.981162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.981188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.981330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.981357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.981499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-07-25 04:16:43.981529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.993 qpair failed and we were unable to recover it. 00:33:28.993 [2024-07-25 04:16:43.981668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.981695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 
00:33:28.994 [2024-07-25 04:16:43.981843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.981885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.982044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.982073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.982210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.982236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.982404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.982433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.982619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.982648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 
00:33:28.994 [2024-07-25 04:16:43.982865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.982891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.983016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.983042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.983159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.983185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.983334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.983360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.983506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.983532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 
00:33:28.994 [2024-07-25 04:16:43.983685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.983729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.983891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.983917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.984045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.984071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.984240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.984273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.984403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.984429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 
00:33:28.994 [2024-07-25 04:16:43.984620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.984649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.984813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.984843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.985017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.985044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.985207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.985240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.985431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.985461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 
00:33:28.994 [2024-07-25 04:16:43.985620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.985646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.985763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.985789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.985933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.985960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.986106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.986132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 00:33:28.994 [2024-07-25 04:16:43.986272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.994 [2024-07-25 04:16:43.986299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.994 qpair failed and we were unable to recover it. 
00:33:28.994 [2024-07-25 04:16:43.986443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.994 [2024-07-25 04:16:43.986470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.994 qpair failed and we were unable to recover it.
00:33:28.994 [2024-07-25 04:16:43.986581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.994 [2024-07-25 04:16:43.986607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.994 qpair failed and we were unable to recover it.
00:33:28.994 [2024-07-25 04:16:43.986754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.994 [2024-07-25 04:16:43.986780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.994 qpair failed and we were unable to recover it.
00:33:28.994 [2024-07-25 04:16:43.986903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.994 [2024-07-25 04:16:43.986929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.994 qpair failed and we were unable to recover it.
00:33:28.994 [2024-07-25 04:16:43.987119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.994 [2024-07-25 04:16:43.987145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.994 qpair failed and we were unable to recover it.
00:33:28.994 [2024-07-25 04:16:43.987295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.994 [2024-07-25 04:16:43.987338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.994 qpair failed and we were unable to recover it.
00:33:28.994 [2024-07-25 04:16:43.987496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.994 [2024-07-25 04:16:43.987526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.994 qpair failed and we were unable to recover it.
00:33:28.994 [2024-07-25 04:16:43.987702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.994 [2024-07-25 04:16:43.987728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.994 qpair failed and we were unable to recover it.
00:33:28.994 [2024-07-25 04:16:43.987918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.994 [2024-07-25 04:16:43.987948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.994 qpair failed and we were unable to recover it.
00:33:28.994 [2024-07-25 04:16:43.988132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.994 [2024-07-25 04:16:43.988160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.994 qpair failed and we were unable to recover it.
00:33:28.994 [2024-07-25 04:16:43.988310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.994 [2024-07-25 04:16:43.988338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.994 qpair failed and we were unable to recover it.
00:33:28.994 [2024-07-25 04:16:43.988531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.994 [2024-07-25 04:16:43.988560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.994 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.988695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.988724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.988890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.988916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.989072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.989101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.989256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.989285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.989432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.989459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.989570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.989596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.989762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.989790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.989952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.989978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.990127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.990153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.990270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.990297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.990423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.990450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.990614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.990644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.990811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.990840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.990983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.991009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.991128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.991154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.991291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.991320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.991489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.991515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.991677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.991706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.991905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.991931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.992073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.992099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.992220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.992253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.992380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.992411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.992583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.992609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.992757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.992799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.992968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.992994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.993120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.993146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.993312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.993342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.993476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.993505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.993657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.993683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.993833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.993860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.994007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.994036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.994209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.994235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.994387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.994416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.994549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.995 [2024-07-25 04:16:43.994591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.995 qpair failed and we were unable to recover it.
00:33:28.995 [2024-07-25 04:16:43.994766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.994792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.994984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.995013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.995177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.995206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.995403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.995430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.995603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.995629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.995779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.995821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.995962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.995989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.996109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.996135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.996285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.996315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.996481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.996506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.996696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.996726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.996864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.996893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.997050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.997078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.997208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.997237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.997429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.997467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.997647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.997675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.997798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.997898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.998061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.998091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.998275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.998303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.998451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.998503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.998745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.998776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.998953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.998979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.999134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.999160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.999325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.999355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.999548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.999575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.999703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.999729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:43.999906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:43.999949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:44.000112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:44.000147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:44.000277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:44.000303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:44.000422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:44.000449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:44.000621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:44.000647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:44.000765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:44.000792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:44.000974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:44.001004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:44.001140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:44.001166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:44.001355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:44.001385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.996 qpair failed and we were unable to recover it.
00:33:28.996 [2024-07-25 04:16:44.001515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.996 [2024-07-25 04:16:44.001545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.001716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.001743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.001912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.001941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.002095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.002125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.002293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.002320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.002436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.002479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.002616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.002647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.002785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.002812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.002927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.002954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.003124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.003165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.003311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.003338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.003460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.003487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.003608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.003634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.003756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.003782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.003959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.004003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.004161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.004190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.004378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.004406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.004541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.004571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.004731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.004760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.004933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.004964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.005135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.005164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.005328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.005357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.005505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.005531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.005676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.005719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.005897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.005950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.006119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.006146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.006265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.006292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.006463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.006492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.006634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.006660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.006782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.006808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.006976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.007002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.007120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.007147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.007338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.007368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.007515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.997 [2024-07-25 04:16:44.007547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:28.997 qpair failed and we were unable to recover it.
00:33:28.997 [2024-07-25 04:16:44.007713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.997 [2024-07-25 04:16:44.007741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.997 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.007912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.007942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.008100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.008129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.008297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.008325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.008472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.008513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 
00:33:28.998 [2024-07-25 04:16:44.008707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.008758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.008930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.008957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.009124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.009154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.009316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.009347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.009513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.009539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 
00:33:28.998 [2024-07-25 04:16:44.009703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.009733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.009864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.009894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.010093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.010122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.010303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.010330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.010477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.010504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 
00:33:28.998 [2024-07-25 04:16:44.010711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.010737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.010868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.010897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.011063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.011095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.011265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.011293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.011463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.011493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 
00:33:28.998 [2024-07-25 04:16:44.011626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.011655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.011818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.011845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.011976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.012003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.012146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.012173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.012331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.012358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 
00:33:28.998 [2024-07-25 04:16:44.012533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.012581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.012831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.012878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.013018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.013044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.013181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.013208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 00:33:28.998 [2024-07-25 04:16:44.013361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.998 [2024-07-25 04:16:44.013389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.998 qpair failed and we were unable to recover it. 
00:33:28.998 [2024-07-25 04:16:44.013537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.013565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.013715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.013741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.013890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.013933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.014099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.014126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.014314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.014344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 
00:33:28.999 [2024-07-25 04:16:44.014531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.014561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.014722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.014749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.014898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.014925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.015091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.015120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.015301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.015328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 
00:33:28.999 [2024-07-25 04:16:44.015479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.015523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.015714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.015743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.015875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.015901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.016026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.016052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.016267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.016293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 
00:33:28.999 [2024-07-25 04:16:44.016465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.016491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.016686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.016716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.016900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.016948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.017122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.017148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.017308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.017338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 
00:33:28.999 [2024-07-25 04:16:44.017523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.017553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.017722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.017749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.017870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.017915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.018077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.018106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.018279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.018307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 
00:33:28.999 [2024-07-25 04:16:44.018482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.018512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.018667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.018696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.018843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.018869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.019019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.019045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.019215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.019259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 
00:33:28.999 [2024-07-25 04:16:44.019433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.019459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.019610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.019636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.019803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.019850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.020044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.020071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.020229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.020266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 
00:33:28.999 [2024-07-25 04:16:44.020412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.020443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:28.999 qpair failed and we were unable to recover it. 00:33:28.999 [2024-07-25 04:16:44.020597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-07-25 04:16:44.020623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.000 qpair failed and we were unable to recover it. 00:33:29.000 [2024-07-25 04:16:44.020789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-07-25 04:16:44.020819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.000 qpair failed and we were unable to recover it. 00:33:29.000 [2024-07-25 04:16:44.021009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-07-25 04:16:44.021055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.000 qpair failed and we were unable to recover it. 00:33:29.000 [2024-07-25 04:16:44.021229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-07-25 04:16:44.021261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.000 qpair failed and we were unable to recover it. 
00:33:29.000 [2024-07-25 04:16:44.021424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-07-25 04:16:44.021453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.000 qpair failed and we were unable to recover it. 00:33:29.000 [2024-07-25 04:16:44.021641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-07-25 04:16:44.021670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.000 qpair failed and we were unable to recover it. 00:33:29.000 [2024-07-25 04:16:44.021818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-07-25 04:16:44.021844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.000 qpair failed and we were unable to recover it. 00:33:29.000 [2024-07-25 04:16:44.022008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-07-25 04:16:44.022037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.000 qpair failed and we were unable to recover it. 00:33:29.000 [2024-07-25 04:16:44.022196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-07-25 04:16:44.022225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.000 qpair failed and we were unable to recover it. 
00:33:29.000 [2024-07-25 04:16:44.022375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.000 [2024-07-25 04:16:44.022401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.000 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." record triplet repeated ~114 more times for tqpair=0x7f5410000b90 (addr=10.0.0.2, port=4420), timestamps 04:16:44.022 through 04:16:44.043 ...]
00:33:29.003 [2024-07-25 04:16:44.043918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.003 [2024-07-25 04:16:44.043961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.003 qpair failed and we were unable to recover it. 00:33:29.003 [2024-07-25 04:16:44.044122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.003 [2024-07-25 04:16:44.044151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.003 qpair failed and we were unable to recover it. 00:33:29.003 [2024-07-25 04:16:44.044295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.003 [2024-07-25 04:16:44.044321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.003 qpair failed and we were unable to recover it. 00:33:29.003 [2024-07-25 04:16:44.044432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.003 [2024-07-25 04:16:44.044459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.003 qpair failed and we were unable to recover it. 00:33:29.003 [2024-07-25 04:16:44.044674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.003 [2024-07-25 04:16:44.044700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.003 qpair failed and we were unable to recover it. 
00:33:29.003 [2024-07-25 04:16:44.044823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.003 [2024-07-25 04:16:44.044850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.003 qpair failed and we were unable to recover it. 00:33:29.003 [2024-07-25 04:16:44.044996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.003 [2024-07-25 04:16:44.045039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.003 qpair failed and we were unable to recover it. 00:33:29.003 [2024-07-25 04:16:44.045200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.003 [2024-07-25 04:16:44.045230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.003 qpair failed and we were unable to recover it. 00:33:29.003 [2024-07-25 04:16:44.045434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.003 [2024-07-25 04:16:44.045460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.003 qpair failed and we were unable to recover it. 00:33:29.003 [2024-07-25 04:16:44.045617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.003 [2024-07-25 04:16:44.045643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.003 qpair failed and we were unable to recover it. 
00:33:29.003 [2024-07-25 04:16:44.045846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.003 [2024-07-25 04:16:44.045899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.003 qpair failed and we were unable to recover it. 00:33:29.003 [2024-07-25 04:16:44.046098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.046124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.046313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.046342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.046501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.046530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.046673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.046700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 
00:33:29.004 [2024-07-25 04:16:44.046826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.046853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.047029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.047055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.047206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.047232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.047405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.047434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.047562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.047591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 
00:33:29.004 [2024-07-25 04:16:44.047757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.047783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.047924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.047949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.048080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.048106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.048259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.048286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.048409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.048435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 
00:33:29.004 [2024-07-25 04:16:44.048582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.048608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.048753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.048779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.048932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.048975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.049140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.049169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.049334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.049361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 
00:33:29.004 [2024-07-25 04:16:44.049483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.049526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.049686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.049715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.049887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.049913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.050080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.050106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.050240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.050276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 
00:33:29.004 [2024-07-25 04:16:44.050455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.050485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.050682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.050712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.050869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.050898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.051035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.051062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.051215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.051248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 
00:33:29.004 [2024-07-25 04:16:44.051389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.051418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.051562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.051588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.051783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.051812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.051977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.052006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.052143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.052170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 
00:33:29.004 [2024-07-25 04:16:44.052318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.052361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.052492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.004 [2024-07-25 04:16:44.052522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.004 qpair failed and we were unable to recover it. 00:33:29.004 [2024-07-25 04:16:44.052700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.052727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.052876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.052902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.053024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.053050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 
00:33:29.005 [2024-07-25 04:16:44.053304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.053330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.053480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.053507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.053704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.053730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.053851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.053877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.054025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.054068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 
00:33:29.005 [2024-07-25 04:16:44.054230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.054264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.054402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.054428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.054586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.054629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.054768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.054797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.054957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.054983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 
00:33:29.005 [2024-07-25 04:16:44.055107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.055150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.055337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.055366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.055511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.055537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.055683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.055726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.055885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.055915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 
00:33:29.005 [2024-07-25 04:16:44.056046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.056072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.056214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.056247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.056428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.056457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.056605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.056631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.056774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.056818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 
00:33:29.005 [2024-07-25 04:16:44.056939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.056968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.057100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.057127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.057252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.057279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.057426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.057452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 00:33:29.005 [2024-07-25 04:16:44.057597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.005 [2024-07-25 04:16:44.057623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.005 qpair failed and we were unable to recover it. 
00:33:29.005 [2024-07-25 04:16:44.057743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.005 [2024-07-25 04:16:44.057790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.005 qpair failed and we were unable to recover it.
[... the same three-line failure triplet (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it") repeats 114 more times between 04:16:44.057958 and 04:16:44.079020, all against tqpair=0x7f5410000b90, addr=10.0.0.2, port=4420 ...]
00:33:29.009 [2024-07-25 04:16:44.079186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.079215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.079420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.079446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.079619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.079649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.079814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.079843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.079981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.080007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 
00:33:29.009 [2024-07-25 04:16:44.080150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.080182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.080308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.080335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.080518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.080545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.080661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.080704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.080859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.080888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 
00:33:29.009 [2024-07-25 04:16:44.081037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.081067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.081265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.081291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.081432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.081458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.081607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.081633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.081798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.081827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 
00:33:29.009 [2024-07-25 04:16:44.081956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.081985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.082155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.082181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.082362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.082407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.082539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.082584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.082759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.082786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 
00:33:29.009 [2024-07-25 04:16:44.082950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.082979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.083142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.083172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.083342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.083368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.083561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.083590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.083807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.083854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 
00:33:29.009 [2024-07-25 04:16:44.084002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.084028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.084175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.084220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.084414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.084444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.084592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.084618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.009 qpair failed and we were unable to recover it. 00:33:29.009 [2024-07-25 04:16:44.084767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.009 [2024-07-25 04:16:44.084793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 
00:33:29.010 [2024-07-25 04:16:44.084971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.085000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.085169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.085195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.085349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.085376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.085501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.085527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.085673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.085699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 
00:33:29.010 [2024-07-25 04:16:44.085873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.085915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.086053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.086082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.086217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.086249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.086361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.086387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.086556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.086585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 
00:33:29.010 [2024-07-25 04:16:44.086773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.086799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.086981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.087010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.087137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.087166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.087313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.087340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.087484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.087528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 
00:33:29.010 [2024-07-25 04:16:44.087731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.087793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.087960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.087987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.088166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.088196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.088368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.088398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.088572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.088598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 
00:33:29.010 [2024-07-25 04:16:44.088771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.088797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.088941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.088970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.089159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.089188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.089345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.089372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.089496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.089522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 
00:33:29.010 [2024-07-25 04:16:44.089669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.089696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.089880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.089909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.090096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.090125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.090295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.090322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.090487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.090532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 
00:33:29.010 [2024-07-25 04:16:44.090662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.090691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.090848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.090874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.091036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.091067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.091225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.091261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.010 qpair failed and we were unable to recover it. 00:33:29.010 [2024-07-25 04:16:44.091435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-25 04:16:44.091461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 
00:33:29.011 [2024-07-25 04:16:44.091633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.091676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.091863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.091913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.092060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.092086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.092206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.092232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.092362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.092388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 
00:33:29.011 [2024-07-25 04:16:44.092546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.092572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.092689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.092715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.092859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.092889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.093053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.093079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.093190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.093216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 
00:33:29.011 [2024-07-25 04:16:44.093377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.093403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.093549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.093575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.093738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.093768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.093955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.093984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.094128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.094154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 
00:33:29.011 [2024-07-25 04:16:44.094305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.094349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.094537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.094566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.094756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.094782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.094975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.095004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.095166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.095195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 
00:33:29.011 [2024-07-25 04:16:44.095379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.095410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.095550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.095576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.095762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.095807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.095971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.095997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.096171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.096200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 
00:33:29.011 [2024-07-25 04:16:44.096373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.096400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.096549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.096576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.096740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.096769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.096968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.096994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.097117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.097143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 
00:33:29.011 [2024-07-25 04:16:44.097294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.097321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.097494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.097520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.097662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.097688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.097858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.097887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.098038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.098065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 
00:33:29.011 [2024-07-25 04:16:44.098213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-25 04:16:44.098239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.011 qpair failed and we were unable to recover it. 00:33:29.011 [2024-07-25 04:16:44.098409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.098438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.098599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.098629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.098798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.098824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.098946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.098972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 
00:33:29.012 [2024-07-25 04:16:44.099084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.099110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.099226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.099259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.099411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.099438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.099575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.099605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.099767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.099793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 
00:33:29.012 [2024-07-25 04:16:44.099955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.099984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.100175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.100203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.100351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.100382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.100504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.100547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.100683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.100711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 
00:33:29.012 [2024-07-25 04:16:44.100840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.100866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.101016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.101057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.101220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.101282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.101460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.101487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.101653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.101682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 
00:33:29.012 [2024-07-25 04:16:44.101869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.101898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.102041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.102067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.102188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.102214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.102367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.102397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.102568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.102594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 
00:33:29.012 [2024-07-25 04:16:44.102749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.102775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.102977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.103006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.103173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.103199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.103330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.103357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.103532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.103558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 
00:33:29.012 [2024-07-25 04:16:44.103729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.103755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.103947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.103976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.104132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.104161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.104339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.104366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.104495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.104542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 
00:33:29.012 [2024-07-25 04:16:44.104739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.104768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.012 [2024-07-25 04:16:44.104939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-25 04:16:44.104967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.012 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.105133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.105164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.105345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.105372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.105524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.105551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 
00:33:29.013 [2024-07-25 04:16:44.105690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.105720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.105885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.105914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.106056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.106083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.106254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.106281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.106413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.106442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 
00:33:29.013 [2024-07-25 04:16:44.106618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.106644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.106806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.106835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.107023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.107052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.107191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.107217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.107348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.107374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 
00:33:29.013 [2024-07-25 04:16:44.107522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.107549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.107728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.107754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.107946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.107980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.108147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.108190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.108336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.108362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 
00:33:29.013 [2024-07-25 04:16:44.108512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.108556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.108724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.108753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.108926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.108952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.109070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.109096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.109255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.109282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 
00:33:29.013 [2024-07-25 04:16:44.109410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.109436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.109608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.109634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.109781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.013 [2024-07-25 04:16:44.109824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.013 qpair failed and we were unable to recover it. 00:33:29.013 [2024-07-25 04:16:44.109964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.109990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 00:33:29.014 [2024-07-25 04:16:44.110141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.110167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 
00:33:29.014 [2024-07-25 04:16:44.110325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.110355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 00:33:29.014 [2024-07-25 04:16:44.110519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.110545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 00:33:29.014 [2024-07-25 04:16:44.110691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.110735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 00:33:29.014 [2024-07-25 04:16:44.110896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.110926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 00:33:29.014 [2024-07-25 04:16:44.111067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.111094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 
00:33:29.014 [2024-07-25 04:16:44.111254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.111299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 00:33:29.014 [2024-07-25 04:16:44.111488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.111517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 00:33:29.014 [2024-07-25 04:16:44.111662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.111688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 00:33:29.014 [2024-07-25 04:16:44.111814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.111839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 00:33:29.014 [2024-07-25 04:16:44.111988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.112017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 
00:33:29.014 [2024-07-25 04:16:44.112208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.112234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 00:33:29.014 [2024-07-25 04:16:44.112367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.112393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 00:33:29.014 [2024-07-25 04:16:44.112511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.112537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 00:33:29.014 [2024-07-25 04:16:44.112653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.112679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 00:33:29.014 [2024-07-25 04:16:44.112859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.014 [2024-07-25 04:16:44.112904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.014 qpair failed and we were unable to recover it. 
00:33:29.014 [2024-07-25 04:16:44.113033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.113063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.113209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.113236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.113387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.113414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.113538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.113565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.113686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.113712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.113877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.113906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.114058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.114087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.114254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.114284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.114418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.114444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.114582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.114611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.114778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.114804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.114952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.114978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.115145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.115174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.115325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.115351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.115516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.115545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.115703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.115732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.115888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.115914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.116036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.116062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.116209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.116235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.116361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.014 [2024-07-25 04:16:44.116387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.014 qpair failed and we were unable to recover it.
00:33:29.014 [2024-07-25 04:16:44.116504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.116530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.116679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.116708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.116852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.116880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.116999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.117025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.117173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.117203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.117407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.117434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.117612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.117641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.117797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.117826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.118007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.118033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.118223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.118260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.118447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.118476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.118689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.118714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.118850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.118879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.119071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.119097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.119271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.119297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.119456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.119485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.119623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.119652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.119824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.119850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.119973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.120016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.120189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.120219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.120372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.120398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.120538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.120564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.120704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.120733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.120873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.120899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.121051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.121093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.121254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.121300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.121423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.121450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.121575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.121616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.121801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.121830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.121990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.122016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.122153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.122183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.122327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.122357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.122533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.122564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.122682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.122708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.122842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.122871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.015 qpair failed and we were unable to recover it.
00:33:29.015 [2024-07-25 04:16:44.123037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.015 [2024-07-25 04:16:44.123064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.123213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.123239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.123425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.123454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.123621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.123647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.123770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.123814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.123977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.124007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.124148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.124176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.124362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.124391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.124548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.124577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.124722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.124748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.124899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.124925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.125047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.125074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.125217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.125262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.125427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.125456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.125656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.125682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.125806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.125833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.125961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.126002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.126164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.126193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.126336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.126364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.126495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.126522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.126631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.126657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.126786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.126813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.127004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.127033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.127185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.127215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.127394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.127420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.127562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.127604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.127771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.127800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.127945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.127971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.128117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.128143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.128284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.128311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.128424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.128450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.128560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.128586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.128712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.128738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.128862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.128888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.129050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.129079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.129240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.016 [2024-07-25 04:16:44.129277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.016 qpair failed and we were unable to recover it.
00:33:29.016 [2024-07-25 04:16:44.129451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.017 [2024-07-25 04:16:44.129477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.017 qpair failed and we were unable to recover it.
00:33:29.017 [2024-07-25 04:16:44.129592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.017 [2024-07-25 04:16:44.129640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.017 qpair failed and we were unable to recover it.
00:33:29.017 [2024-07-25 04:16:44.129777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.017 [2024-07-25 04:16:44.129806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.017 qpair failed and we were unable to recover it.
00:33:29.017 [2024-07-25 04:16:44.129993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.017 [2024-07-25 04:16:44.130020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.017 qpair failed and we were unable to recover it.
00:33:29.017 [2024-07-25 04:16:44.130181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.130210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.130374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.130401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.130526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.130553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.130744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.130774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.130895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.130924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 
00:33:29.017 [2024-07-25 04:16:44.131087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.131113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.131229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.131263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.131383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.131409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.131550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.131576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.131736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.131762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 
00:33:29.017 [2024-07-25 04:16:44.131945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.131974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.132121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.132147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.132293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.132337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.132492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.132521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.132676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.132702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 
00:33:29.017 [2024-07-25 04:16:44.132858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.132887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.133042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.133071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.133230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.133264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.133377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.133418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.133557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.133586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 
00:33:29.017 [2024-07-25 04:16:44.133752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.133778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.133888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.133914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.134090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.134119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.134292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.134318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.134469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.134514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 
00:33:29.017 [2024-07-25 04:16:44.134644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.134673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.134866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.134892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.135014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.135040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.135158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.135183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.135299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.135325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 
00:33:29.017 [2024-07-25 04:16:44.135449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.135476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.017 qpair failed and we were unable to recover it. 00:33:29.017 [2024-07-25 04:16:44.135603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.017 [2024-07-25 04:16:44.135629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.135756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.135782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.135906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.135932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.136092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.136134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 
00:33:29.018 [2024-07-25 04:16:44.136299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.136326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.136436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.136462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.136628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.136661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.136833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.136859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.136980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.137023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 
00:33:29.018 [2024-07-25 04:16:44.137163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.137191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.137358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.137385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.137534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.137578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.137716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.137745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.137915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.137942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 
00:33:29.018 [2024-07-25 04:16:44.138098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.138124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.138267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.138293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.138440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.138466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.138608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.138635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.138832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.138861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 
00:33:29.018 [2024-07-25 04:16:44.139002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.139028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.139175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.139217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.139412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.139442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.139610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.139636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.139780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.139806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 
00:33:29.018 [2024-07-25 04:16:44.139951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.139980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.140115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.140141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.140306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.140344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.140534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.140564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.140756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.140781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 
00:33:29.018 [2024-07-25 04:16:44.140919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.140948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.141071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.141099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.018 qpair failed and we were unable to recover it. 00:33:29.018 [2024-07-25 04:16:44.141327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.018 [2024-07-25 04:16:44.141354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.141539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.141567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.141743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.141769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 
00:33:29.019 [2024-07-25 04:16:44.141892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.141918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.142038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.142064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.142200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.142229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.142390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.142416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.142559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.142586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 
00:33:29.019 [2024-07-25 04:16:44.142752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.142782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.142952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.142978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.143130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.143159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.143296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.143326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.143470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.143496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 
00:33:29.019 [2024-07-25 04:16:44.143612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.143638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.143807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.143836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.144008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.144038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.144163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.144205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.144388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.144415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 
00:33:29.019 [2024-07-25 04:16:44.144553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.144579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.144702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.144746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.144905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.144935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.145081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.145108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 00:33:29.019 [2024-07-25 04:16:44.145253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.019 [2024-07-25 04:16:44.145280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.019 qpair failed and we were unable to recover it. 
00:33:29.019 [2024-07-25 04:16:44.145449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.019 [2024-07-25 04:16:44.145478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.019 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair failure sequence above repeats for every subsequent connection attempt between 04:16:44.145 and 04:16:44.166, always with addr=10.0.0.2, port=4420 and errno = 111; tqpair is 0x7f5410000b90 throughout, except for a short run of attempts around 04:16:44.162-44.163 that report tqpair=0x7f5400000b90 before reverting ...]
00:33:29.022 [2024-07-25 04:16:44.166450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.022 [2024-07-25 04:16:44.166476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.022 qpair failed and we were unable to recover it. 00:33:29.022 [2024-07-25 04:16:44.166671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.022 [2024-07-25 04:16:44.166701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.166910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.166959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.167117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.167143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.167301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.167331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 
00:33:29.023 [2024-07-25 04:16:44.167462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.167491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.167635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.167661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.167821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.167851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.168011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.168040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.168200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.168229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 
00:33:29.023 [2024-07-25 04:16:44.168378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.168404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.168528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.168554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.168681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.168707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.168872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.168901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.169089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.169118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 
00:33:29.023 [2024-07-25 04:16:44.169257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.169284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.169408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.169434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.169574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.169603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.169766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.169793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.169916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.169960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 
00:33:29.023 [2024-07-25 04:16:44.170147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.170176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.170326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.170353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.170507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.170533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.170657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.170683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.170862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.170888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 
00:33:29.023 [2024-07-25 04:16:44.171029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.171074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.171201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.171230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.171408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.171434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.171596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.171625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.171791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.171820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 
00:33:29.023 [2024-07-25 04:16:44.171962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.171988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.172133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.172159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.172325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.172369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.172547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.172574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.172741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.172771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 
00:33:29.023 [2024-07-25 04:16:44.172953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.023 [2024-07-25 04:16:44.172999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.023 qpair failed and we were unable to recover it. 00:33:29.023 [2024-07-25 04:16:44.173177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.173207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.173378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.173411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.173535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.173561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.173713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.173739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 
00:33:29.024 [2024-07-25 04:16:44.173909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.173940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.174105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.174135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.174337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.174365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.174514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.174544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.174699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.174729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 
00:33:29.024 [2024-07-25 04:16:44.174899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.174925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.175046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.175072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.175259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.175290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.175458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.175485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.175607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.175650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 
00:33:29.024 [2024-07-25 04:16:44.175778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.175808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.175983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.176009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.176172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.176202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.176371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.176401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.176543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.176569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 
00:33:29.024 [2024-07-25 04:16:44.176693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.176719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.176920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.176947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.177064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.177091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.177204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.177230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.177356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.177383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 
00:33:29.024 [2024-07-25 04:16:44.177529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.177555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.177724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.177753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.177890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.177918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.178041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.178070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.178218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.178287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 
00:33:29.024 [2024-07-25 04:16:44.178470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.178497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.178626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.178652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.178795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.178821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.178974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.178999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 00:33:29.024 [2024-07-25 04:16:44.179151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.024 [2024-07-25 04:16:44.179177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.024 qpair failed and we were unable to recover it. 
00:33:29.024 [2024-07-25 04:16:44.179301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.025 [2024-07-25 04:16:44.179329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.025 qpair failed and we were unable to recover it. 00:33:29.025 [2024-07-25 04:16:44.179449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.025 [2024-07-25 04:16:44.179476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.025 qpair failed and we were unable to recover it. 00:33:29.025 [2024-07-25 04:16:44.179601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.025 [2024-07-25 04:16:44.179630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.025 qpair failed and we were unable to recover it. 00:33:29.025 [2024-07-25 04:16:44.179780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.025 [2024-07-25 04:16:44.179807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.025 qpair failed and we were unable to recover it. 00:33:29.025 [2024-07-25 04:16:44.179926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.025 [2024-07-25 04:16:44.179952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.025 qpair failed and we were unable to recover it. 
00:33:29.025 [2024-07-25 04:16:44.180096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.180123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.180274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.180302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.180428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.180454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.180605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.180631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.180803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.180855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.181031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.181085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.181286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.181313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.181434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.181460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.181585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.181611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.181753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.181779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.181900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.181929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.182106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.182134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.182298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.182325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.182501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.182545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.182733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.182763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.182908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.182936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.183072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.183100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.183251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.183277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.183425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.183451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.183568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.183611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.183740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.183769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.183934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.183959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.184087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.184113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.184306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.184333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.184452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.184478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.184620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.184663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.184851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.184879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.025 [2024-07-25 04:16:44.185017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.025 [2024-07-25 04:16:44.185042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.025 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.185189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.185215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.185363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.185390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.185546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.185573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.185706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.185734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.185867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.185895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.186055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.186084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.186269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.186312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.186431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.186457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.186599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.186625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.186768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.186794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.186962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.186988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.187123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.187152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.187323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.187349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.187498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.187538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.187716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.187742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.187876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.187909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.188048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.188077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.188250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.188276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.188397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.188423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.188571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.188600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.188753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.188780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.188922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.188948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.189159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.189185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.189328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.189355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.189465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.189506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.189669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.189700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.189903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.189929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.190093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.190121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.190301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.190327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.190507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.190532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.190693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.190722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.190914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.190963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.191111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.191137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.191285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.191328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.191485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.191515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.191685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.191711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.191896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.191950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.026 qpair failed and we were unable to recover it.
00:33:29.026 [2024-07-25 04:16:44.192106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.026 [2024-07-25 04:16:44.192135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.192308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.192334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.192449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.192492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.192678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.192707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.192872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.192898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.193016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.193057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.193202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.193230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.193406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.193431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.193583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.193627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.193787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.193815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.193980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.194006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.194165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.194194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.194336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.194365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.194527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.194553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.194699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.194725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.194847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.194873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.195051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.195080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.195215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.195250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.195420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.195446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.195615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.195655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.195782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.195811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.195972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.195999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.196147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.196192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.196367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.196393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.196509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.196536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.196719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.196745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.196899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.196925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.197094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.197137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.197266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.197294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.197446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.197473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.197637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.197666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.197815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.197845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.197981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.198010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.198190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.027 [2024-07-25 04:16:44.198219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.027 qpair failed and we were unable to recover it.
00:33:29.027 [2024-07-25 04:16:44.198372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.198399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.198525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.198550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.198714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.198742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.198901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.198929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.199067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.199093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.199217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.199249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.199401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.199427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.199597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.199625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.199789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.199818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.199952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.199981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.200117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.200145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.200297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.200324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.200482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.200520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.200699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.200744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.200948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.200992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.201142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.028 [2024-07-25 04:16:44.201167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.028 qpair failed and we were unable to recover it.
00:33:29.028 [2024-07-25 04:16:44.201320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.201347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.201493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.201537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.201701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.201744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.201911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.201953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.202099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.202125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 
00:33:29.028 [2024-07-25 04:16:44.202291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.202322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.202487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.202515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.202651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.202680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.202840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.202868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.203029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.203058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 
00:33:29.028 [2024-07-25 04:16:44.203206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.203235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.203421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.203450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.203584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.203613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.203744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.203773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.203958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.204002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 
00:33:29.028 [2024-07-25 04:16:44.204144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.204170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.204296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.204323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.204483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.204512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.204710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.204737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.204908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.204951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 
00:33:29.028 [2024-07-25 04:16:44.205071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.205098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.205278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.205305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.028 qpair failed and we were unable to recover it. 00:33:29.028 [2024-07-25 04:16:44.205448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.028 [2024-07-25 04:16:44.205474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.205627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.205678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.205844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.205887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 
00:33:29.029 [2024-07-25 04:16:44.206078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.206127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.206293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.206322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.206531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.206574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.206717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.206761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.206947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.206997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 
00:33:29.029 [2024-07-25 04:16:44.207148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.207175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.207348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.207392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.207562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.207592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.207729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.207758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.207974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.208023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 
00:33:29.029 [2024-07-25 04:16:44.208188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.208214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.208373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.208400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.208549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.208578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.208792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.208821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.208986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.209014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 
00:33:29.029 [2024-07-25 04:16:44.209148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.209176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.209376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.209415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.209554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.209582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.209779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.209822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.210022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.210077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 
00:33:29.029 [2024-07-25 04:16:44.210195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.210222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.210386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.210413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.210533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.210559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.210727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.210754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.210905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.210931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 
00:33:29.029 [2024-07-25 04:16:44.211092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.211119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.211271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.211298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.211447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.211474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.211620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.211649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.211774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.211803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 
00:33:29.029 [2024-07-25 04:16:44.211967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.211996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.212164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.212192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.212320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.212348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.212462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.212490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.212659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.212704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 
00:33:29.029 [2024-07-25 04:16:44.212873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.212917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.213091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.213117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.213307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.213337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.213516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.213564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.213767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.213810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 
00:33:29.029 [2024-07-25 04:16:44.214021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.029 [2024-07-25 04:16:44.214047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.029 qpair failed and we were unable to recover it. 00:33:29.029 [2024-07-25 04:16:44.214178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.214205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.214382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.214409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.214562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.214588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.214733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.214778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 
00:33:29.030 [2024-07-25 04:16:44.214973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.215016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.215164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.215192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.215344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.215370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.215528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.215557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.215690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.215734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 
00:33:29.030 [2024-07-25 04:16:44.215922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.215950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.216102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.216130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.216329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.216355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.216503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.216545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.216705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.216733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 
00:33:29.030 [2024-07-25 04:16:44.216888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.216917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.217080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.217109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.217288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.217314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.217486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.217513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.217678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.217709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 
00:33:29.030 [2024-07-25 04:16:44.217873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.217902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.218128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.218157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.218338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.218365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.218515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.218541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.218743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.218793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 
00:33:29.030 [2024-07-25 04:16:44.218988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.219021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.219158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.219187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.219384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.219410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.219605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.219634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.219797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.219846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 
00:33:29.030 [2024-07-25 04:16:44.219999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.220028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.220148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.220177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.220336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.220362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.220533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.220559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.220714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.220743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 
00:33:29.030 [2024-07-25 04:16:44.220938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.220991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.221174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.221203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.221378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.221405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.221543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.221572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.221760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.221827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 
00:33:29.030 [2024-07-25 04:16:44.221985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.222015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.222178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.222206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.222357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.222384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.222546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.222589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 00:33:29.030 [2024-07-25 04:16:44.222776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.030 [2024-07-25 04:16:44.222804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.030 qpair failed and we were unable to recover it. 
00:33:29.031 [2024-07-25 04:16:44.223084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.223137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.223298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.223325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.223447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.223473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.223597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.223622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.223784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.223825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 
00:33:29.031 [2024-07-25 04:16:44.223982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.224011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.224148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.224174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.224355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.224382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.224555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.224584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.224797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.224848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 
00:33:29.031 [2024-07-25 04:16:44.225003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.225032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.225197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.225225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.225375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.225401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.225518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.225561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.225718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.225747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 
00:33:29.031 [2024-07-25 04:16:44.225914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.225943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.226136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.226165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.226324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.226350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.226500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.226526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.226660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.226689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 
00:33:29.031 [2024-07-25 04:16:44.226887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.226929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.227065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.227095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.227248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.227275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.227390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.227416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.227530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.227555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 
00:33:29.031 [2024-07-25 04:16:44.227701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.227727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.227865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.227894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.228081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.228110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.228235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.228292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.228418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.228444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 
00:33:29.031 [2024-07-25 04:16:44.228606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.228645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.228849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.228895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.229090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.229119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.229312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.229339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.229552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.229581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 
00:33:29.031 [2024-07-25 04:16:44.229779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.229824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.229967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.230010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.230166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.230193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.230372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.230418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.031 [2024-07-25 04:16:44.230585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.230614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 
00:33:29.031 [2024-07-25 04:16:44.230808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.031 [2024-07-25 04:16:44.230852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.031 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.231000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.231026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.231144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.231172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.231324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.231350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.231502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.231545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 
00:33:29.032 [2024-07-25 04:16:44.231745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.231791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.231931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.231960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.232151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.232179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.232339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.232369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.232516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.232561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 
00:33:29.032 [2024-07-25 04:16:44.232748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.232776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.233004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.233033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.233224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.233261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.233400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.233426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.233588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.233617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 
00:33:29.032 [2024-07-25 04:16:44.233753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.233782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.233975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.234004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.234193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.234221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.234419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.234445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 00:33:29.032 [2024-07-25 04:16:44.234614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.032 [2024-07-25 04:16:44.234643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.032 qpair failed and we were unable to recover it. 
00:33:29.032 [2024-07-25 04:16:44.234837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.032 [2024-07-25 04:16:44.234890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.032 qpair failed and we were unable to recover it.
00:33:29.032 [2024-07-25 04:16:44.235048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.032 [2024-07-25 04:16:44.235077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.032 qpair failed and we were unable to recover it.
00:33:29.032 [2024-07-25 04:16:44.235237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.032 [2024-07-25 04:16:44.235288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.032 qpair failed and we were unable to recover it.
00:33:29.032 [2024-07-25 04:16:44.235444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.032 [2024-07-25 04:16:44.235470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.032 qpair failed and we were unable to recover it.
00:33:29.032 [2024-07-25 04:16:44.235615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.032 [2024-07-25 04:16:44.235658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.032 qpair failed and we were unable to recover it.
00:33:29.032 [2024-07-25 04:16:44.235844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.032 [2024-07-25 04:16:44.235873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.032 qpair failed and we were unable to recover it.
00:33:29.032 [2024-07-25 04:16:44.236038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.032 [2024-07-25 04:16:44.236067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.032 qpair failed and we were unable to recover it.
00:33:29.032 [2024-07-25 04:16:44.236196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.032 [2024-07-25 04:16:44.236225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.032 qpair failed and we were unable to recover it.
00:33:29.032 [2024-07-25 04:16:44.236415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.032 [2024-07-25 04:16:44.236441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.032 qpair failed and we were unable to recover it.
00:33:29.032 [2024-07-25 04:16:44.236615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.236641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.236816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.236846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.237008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.237036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.237226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.237259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.237411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.237437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.237606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.237635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.237851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.237905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.238064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.238093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.238231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.238270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.238441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.238467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.238586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.238612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.238760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.238785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.238935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.238979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.239109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.239138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.239285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.239312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.239463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.239489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.239683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.239711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.239872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.239900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.240080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.240109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.240273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.240315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.240469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.240495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.240672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.240713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.240887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.240915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.241078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.241107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.241246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.241272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.241393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.241419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.241558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.241587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.241772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.241800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.241948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.241977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.242127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.242156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.242301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.242327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.242474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.242500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.242664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.242693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.242855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.242883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.243066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.243108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.243263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.243305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.243427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.243453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.033 qpair failed and we were unable to recover it.
00:33:29.033 [2024-07-25 04:16:44.243563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.033 [2024-07-25 04:16:44.243589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.243774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.243804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.243987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.244017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.244205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.244234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.244414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.244440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.244560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.244585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.244776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.244806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.244992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.245021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.245175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.245204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.245359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.245385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.245504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.245551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.245726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.245752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.245943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.245972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.246129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.246158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.246347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.246373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.246500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.246526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.246674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.246699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.246878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.246938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.247125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.247153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.247326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.247353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.247468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.247494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.247667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.247693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.247836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.247864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.248054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.248082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.248251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.248280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.248415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.248441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.248617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.248643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.248790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.248819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.248976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.249004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.249170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.249199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.249371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.249397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.249545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.249571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.249716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.249742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.249850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.249891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.250060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.250089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.250297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.250341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.250483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.250509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.250684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.034 [2024-07-25 04:16:44.250712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.034 qpair failed and we were unable to recover it.
00:33:29.034 [2024-07-25 04:16:44.250876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.250904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.251074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.251103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.251239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.251290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.251437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.251462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.251610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.251636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.251779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.251808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.251981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.252009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.252169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.252199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.252406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.252433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.252589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.252615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.252753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.252782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.252964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.252992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.253163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.253189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.253421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.253451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.253615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.253644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.253816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.253842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.253963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.254005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.035 [2024-07-25 04:16:44.254145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.035 [2024-07-25 04:16:44.254174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.035 qpair failed and we were unable to recover it.
00:33:29.318 [2024-07-25 04:16:44.254351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.318 [2024-07-25 04:16:44.254377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.318 qpair failed and we were unable to recover it.
00:33:29.318 [2024-07-25 04:16:44.254533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.318 [2024-07-25 04:16:44.254559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.318 qpair failed and we were unable to recover it.
00:33:29.318 [2024-07-25 04:16:44.254679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.318 [2024-07-25 04:16:44.254705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.318 qpair failed and we were unable to recover it.
00:33:29.318 [2024-07-25 04:16:44.254829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.318 [2024-07-25 04:16:44.254855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.318 qpair failed and we were unable to recover it.
00:33:29.318 [2024-07-25 04:16:44.254975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.318 [2024-07-25 04:16:44.255001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.318 qpair failed and we were unable to recover it.
00:33:29.318 [2024-07-25 04:16:44.255142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.318 [2024-07-25 04:16:44.255181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.318 qpair failed and we were unable to recover it.
00:33:29.318 [2024-07-25 04:16:44.255357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.318 [2024-07-25 04:16:44.255386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.318 qpair failed and we were unable to recover it.
00:33:29.318 [2024-07-25 04:16:44.255514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.318 [2024-07-25 04:16:44.255541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.318 qpair failed and we were unable to recover it.
00:33:29.318 [2024-07-25 04:16:44.255704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.318 [2024-07-25 04:16:44.255730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.318 qpair failed and we were unable to recover it.
00:33:29.318 [2024-07-25 04:16:44.255882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.318 [2024-07-25 04:16:44.255908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.318 qpair failed and we were unable to recover it.
00:33:29.318 [2024-07-25 04:16:44.256029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.318 [2024-07-25 04:16:44.256055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.318 qpair failed and we were unable to recover it.
00:33:29.318 [2024-07-25 04:16:44.256227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.318 [2024-07-25 04:16:44.256266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.318 qpair failed and we were unable to recover it.
00:33:29.318 [2024-07-25 04:16:44.256439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.318 [2024-07-25 04:16:44.256465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.318 qpair failed and we were unable to recover it.
00:33:29.318 [2024-07-25 04:16:44.256587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.318 [2024-07-25 04:16:44.256632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.318 qpair failed and we were unable to recover it. 00:33:29.318 [2024-07-25 04:16:44.256759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.318 [2024-07-25 04:16:44.256788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.318 qpair failed and we were unable to recover it. 00:33:29.318 [2024-07-25 04:16:44.256957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.318 [2024-07-25 04:16:44.256983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.318 qpair failed and we were unable to recover it. 00:33:29.318 [2024-07-25 04:16:44.257100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.318 [2024-07-25 04:16:44.257126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.318 qpair failed and we were unable to recover it. 00:33:29.318 [2024-07-25 04:16:44.257302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.318 [2024-07-25 04:16:44.257332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.318 qpair failed and we were unable to recover it. 
00:33:29.318 [2024-07-25 04:16:44.257513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.318 [2024-07-25 04:16:44.257539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.318 qpair failed and we were unable to recover it. 00:33:29.318 [2024-07-25 04:16:44.257676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.257705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.257850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.257879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.258049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.258076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.258202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.258232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 
00:33:29.319 [2024-07-25 04:16:44.258374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.258400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.258563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.258588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.258741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.258768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.258891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.258917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.259090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.259119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 
00:33:29.319 [2024-07-25 04:16:44.259291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.259317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.259465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.259491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.259613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.259639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.259765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.259791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.259916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.259942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 
00:33:29.319 [2024-07-25 04:16:44.260060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.260085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.260205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.260231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.260375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.260401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.260566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.260590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.260736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.260762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 
00:33:29.319 [2024-07-25 04:16:44.260876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.260901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.261074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.261100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.261222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.261256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.261375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.261400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.261517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.261544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 
00:33:29.319 [2024-07-25 04:16:44.261662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.261688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.261805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.261830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.261949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.261975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.262118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.262160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.262290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.262320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 
00:33:29.319 [2024-07-25 04:16:44.262478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.262503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.262653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.262683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.262817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.262842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.262966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.262992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.263105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.263130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 
00:33:29.319 [2024-07-25 04:16:44.263266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.263295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.263438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.263464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.263617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.263659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.263812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.319 [2024-07-25 04:16:44.263841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.319 qpair failed and we were unable to recover it. 00:33:29.319 [2024-07-25 04:16:44.264015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.264041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 
00:33:29.320 [2024-07-25 04:16:44.264189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.264217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.264371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.264397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.264515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.264540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.264658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.264685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.264889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.264917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 
00:33:29.320 [2024-07-25 04:16:44.265058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.265083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.265277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.265306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.265446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.265472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.265617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.265643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.265764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.265790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 
00:33:29.320 [2024-07-25 04:16:44.265909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.265937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.266079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.266105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.266227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.266259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.266380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.266406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.266522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.266547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 
00:33:29.320 [2024-07-25 04:16:44.266662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.266688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.266830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.266858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.267021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.267046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.267161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.267203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.267395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.267433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 
00:33:29.320 [2024-07-25 04:16:44.267566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.267593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.267707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.267750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.267939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.267969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.268121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.268147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.268270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.268296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 
00:33:29.320 [2024-07-25 04:16:44.268438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.268463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.268576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.268602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.268772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.268798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.268946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.268976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.269118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.269161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 
00:33:29.320 [2024-07-25 04:16:44.269333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.269359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.269499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.269525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.269668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.269707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.269908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.269939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.270117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.270147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 
00:33:29.320 [2024-07-25 04:16:44.270315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.270344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.270458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.320 [2024-07-25 04:16:44.270484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.320 [2024-07-25 04:16:44.270628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.270672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.270837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.270880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.271041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.271085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 
00:33:29.321 [2024-07-25 04:16:44.271205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.271231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.271378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.271404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.271571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.271600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.271793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.271837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.272028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.272073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 
00:33:29.321 [2024-07-25 04:16:44.272224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.272260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.272392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.272418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.272582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.272627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.272801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.272846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.273044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.273089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 
00:33:29.321 [2024-07-25 04:16:44.273208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.273234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.273392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.273419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.273583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.273628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.273765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.273814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.274005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.274052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 
00:33:29.321 [2024-07-25 04:16:44.274201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.274227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.274381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.274426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.274599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.274625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.274743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.274768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.274951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.274977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 
00:33:29.321 [2024-07-25 04:16:44.275126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.275152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.275290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.275320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.275483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.275512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.275701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.275745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.275901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.275944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 
00:33:29.321 [2024-07-25 04:16:44.276102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.276130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.276254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.276282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.276409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.276435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.276579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.276608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.276769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.276798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 
00:33:29.321 [2024-07-25 04:16:44.276932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.276961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.277119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.277147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.277298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.277329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.277459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.277485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-07-25 04:16:44.277697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-07-25 04:16:44.277726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 
00:33:29.322 [2024-07-25 04:16:44.277881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.277910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.278041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.278069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.278235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.278272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.278440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.278466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.278611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.278640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 
00:33:29.322 [2024-07-25 04:16:44.278799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.278828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.278987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.279016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.279171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.279200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.279382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.279408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.279574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.279603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 
00:33:29.322 [2024-07-25 04:16:44.279740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.279769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.279931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.279959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.280104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.280130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.280282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.280309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.280435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.280461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 
00:33:29.322 [2024-07-25 04:16:44.280626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.280655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.280813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.280842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.280981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.281025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.281187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.281215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.281406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.281432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 
00:33:29.322 [2024-07-25 04:16:44.281575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.281601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.281739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.281767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.281927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.281956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.282110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.282138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.282314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.282344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 
00:33:29.322 [2024-07-25 04:16:44.282457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.282483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.282663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.282689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.282820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.282849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.283063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.283092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.283283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.283310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 
00:33:29.322 [2024-07-25 04:16:44.283448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.283474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.283680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.283709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.283938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.283984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.284169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.284197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.284373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.284399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 
00:33:29.322 [2024-07-25 04:16:44.284522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.284548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.284706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.284735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-07-25 04:16:44.284895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-07-25 04:16:44.284924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.285150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.285178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.285329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.285356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 
00:33:29.323 [2024-07-25 04:16:44.285503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.285529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.285677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.285703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.285882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.285911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.286042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.286071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.286205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.286231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 
00:33:29.323 [2024-07-25 04:16:44.286411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.286437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.286577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.286607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.286811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.286837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.286982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.287011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.287171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.287200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 
00:33:29.323 [2024-07-25 04:16:44.287349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.287375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.287525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.287551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.287678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.287720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.287904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.287972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.288159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.288188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 
00:33:29.323 [2024-07-25 04:16:44.288331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.288357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.288508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.288533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.288696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.288725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.288909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.288951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 00:33:29.323 [2024-07-25 04:16:44.289189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.323 [2024-07-25 04:16:44.289215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.323 qpair failed and we were unable to recover it. 
00:33:29.323 [2024-07-25 04:16:44.289377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.323 [2024-07-25 04:16:44.289403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.323 qpair failed and we were unable to recover it.
00:33:29.323 [2024-07-25 04:16:44.289525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.323 [2024-07-25 04:16:44.289550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.323 qpair failed and we were unable to recover it.
00:33:29.323 [2024-07-25 04:16:44.289669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.323 [2024-07-25 04:16:44.289695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.323 qpair failed and we were unable to recover it.
00:33:29.323 [2024-07-25 04:16:44.289816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.323 [2024-07-25 04:16:44.289842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.323 qpair failed and we were unable to recover it.
00:33:29.323 [2024-07-25 04:16:44.290014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.323 [2024-07-25 04:16:44.290043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.323 qpair failed and we were unable to recover it.
00:33:29.323 [2024-07-25 04:16:44.290185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.323 [2024-07-25 04:16:44.290214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.323 qpair failed and we were unable to recover it.
00:33:29.323 [2024-07-25 04:16:44.290365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.323 [2024-07-25 04:16:44.290408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.323 qpair failed and we were unable to recover it.
00:33:29.323 [2024-07-25 04:16:44.290591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.323 [2024-07-25 04:16:44.290620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.323 qpair failed and we were unable to recover it.
00:33:29.323 [2024-07-25 04:16:44.290786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.323 [2024-07-25 04:16:44.290811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.323 qpair failed and we were unable to recover it.
00:33:29.323 [2024-07-25 04:16:44.290971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.323 [2024-07-25 04:16:44.290999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.291129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.291158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.291302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.291329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.291514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.291557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.291682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.291711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.291874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.291900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.292024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.292068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.292220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.292251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.292376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.292402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.292548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.292574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.292702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.292729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.292912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.292938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.293128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.293157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.293325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.293351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.293524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.293550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.293669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.293695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.293847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.293873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.294016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.294059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.294226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.294256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.294381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.294406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.294524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.294550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.294697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.294741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.294901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.294930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.295095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.295121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.295303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.295330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.295481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.295525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.295719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.295744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.295887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.295917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.296084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.296113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.296278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.296304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.296495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.296524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.296694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.296720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.296863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.296889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.297038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.297064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.297212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.297238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.297363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.297389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.297504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.297530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.297657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.297683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.324 [2024-07-25 04:16:44.297809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.324 [2024-07-25 04:16:44.297836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.324 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.297980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.298005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.298190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.298219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.298363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.298389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.298536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.298563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.298727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.298755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.298891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.298916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.299067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.299093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.299298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.299327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.299495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.299520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.299693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.299722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.299851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.299879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.300056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.300082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.300278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.300308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.300442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.300471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.300635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.300661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.300823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.300852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.301038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.301067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.301235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.301268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.301444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.301473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.301604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.301633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.301813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.301840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.302033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.302062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.302194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.302223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.302395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.302421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.302544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.302570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.302691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.302721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.302869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.302895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.303091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.303119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.303297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.303324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.303443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.303468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.303643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.303685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.303817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.303846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.304016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.304042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.304210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.304239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.304410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.304439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.304581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.304607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.304731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.304757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.304878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.325 [2024-07-25 04:16:44.304904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.325 qpair failed and we were unable to recover it.
00:33:29.325 [2024-07-25 04:16:44.305052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.326 [2024-07-25 04:16:44.305077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.326 qpair failed and we were unable to recover it.
00:33:29.326 [2024-07-25 04:16:44.305254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.326 [2024-07-25 04:16:44.305284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.326 qpair failed and we were unable to recover it.
00:33:29.326 [2024-07-25 04:16:44.305439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.326 [2024-07-25 04:16:44.305467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.326 qpair failed and we were unable to recover it.
00:33:29.326 [2024-07-25 04:16:44.305608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.326 [2024-07-25 04:16:44.305633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.326 qpair failed and we were unable to recover it.
00:33:29.326 [2024-07-25 04:16:44.305774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.326 [2024-07-25 04:16:44.305799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.326 qpair failed and we were unable to recover it.
00:33:29.326 [2024-07-25 04:16:44.305972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.326 [2024-07-25 04:16:44.306000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.326 qpair failed and we were unable to recover it.
00:33:29.326 [2024-07-25 04:16:44.306168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.326 [2024-07-25 04:16:44.306195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.326 qpair failed and we were unable to recover it.
00:33:29.326 [2024-07-25 04:16:44.306392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.326 [2024-07-25 04:16:44.306421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.326 qpair failed and we were unable to recover it.
00:33:29.326 [2024-07-25 04:16:44.306561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.326 [2024-07-25 04:16:44.306590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.326 qpair failed and we were unable to recover it.
00:33:29.326 [2024-07-25 04:16:44.306731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.306757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.306909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.306935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.307056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.307081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.307221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.307253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.307374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.307400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 
00:33:29.326 [2024-07-25 04:16:44.307537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.307562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.307711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.307736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.307855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.307898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.308025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.308055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.308222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.308255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 
00:33:29.326 [2024-07-25 04:16:44.308416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.308445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.308599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.308627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.308775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.308800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.308949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.308992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.309114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.309143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 
00:33:29.326 [2024-07-25 04:16:44.309291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.309318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.309466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.309493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.309607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.309634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.309782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.309809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.309952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.309982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 
00:33:29.326 [2024-07-25 04:16:44.310156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.310182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.310333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.310359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.310479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.310505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.310633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.310659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.310801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.310827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 
00:33:29.326 [2024-07-25 04:16:44.310975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.311001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.311182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.311225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.311400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.311426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.326 [2024-07-25 04:16:44.311591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.326 [2024-07-25 04:16:44.311620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.326 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.311757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.311785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 
00:33:29.327 [2024-07-25 04:16:44.311957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.311983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.312104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.312130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.312335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.312365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.312510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.312536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.312722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.312751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 
00:33:29.327 [2024-07-25 04:16:44.312884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.312913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.313079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.313105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.313271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.313301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.313469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.313498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.313688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.313714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 
00:33:29.327 [2024-07-25 04:16:44.313863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.313892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.314078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.314107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.314278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.314305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.314430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.314475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.314605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.314635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 
00:33:29.327 [2024-07-25 04:16:44.314801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.314827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.314946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.314993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.315197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.315223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.315382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.315408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.315585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.315616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 
00:33:29.327 [2024-07-25 04:16:44.315753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.315783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.315959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.315986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.316129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.316159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.316324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.316353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.316494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.316520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 
00:33:29.327 [2024-07-25 04:16:44.316694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.316736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.316899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.316928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.317073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.317099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.317248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.317292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.317427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.317456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 
00:33:29.327 [2024-07-25 04:16:44.317628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.317654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.317799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.327 [2024-07-25 04:16:44.317825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.327 qpair failed and we were unable to recover it. 00:33:29.327 [2024-07-25 04:16:44.317955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.317998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.318184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.318213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.318360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.318386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 
00:33:29.328 [2024-07-25 04:16:44.318505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.318548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.318716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.318741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.318865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.318891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.319035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.319062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.319188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.319214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 
00:33:29.328 [2024-07-25 04:16:44.319341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.319382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.319542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.319570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.319759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.319785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.319908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.319934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.320082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.320108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 
00:33:29.328 [2024-07-25 04:16:44.320255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.320281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.320397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.320424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.320552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.320578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.320756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.320782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.320948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.320978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 
00:33:29.328 [2024-07-25 04:16:44.321110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.321139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.321278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.321305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.321450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.321477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.321629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.321657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.321801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.321828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 
00:33:29.328 [2024-07-25 04:16:44.322016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.322045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.322232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.322267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.322434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.322464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.322582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.322608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.322749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.322778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 
00:33:29.328 [2024-07-25 04:16:44.322912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.322938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.323114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.323156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.323315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.323344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.323512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.323538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 00:33:29.328 [2024-07-25 04:16:44.323677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.328 [2024-07-25 04:16:44.323719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.328 qpair failed and we were unable to recover it. 
00:33:29.328 [2024-07-25 04:16:44.323912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.328 [2024-07-25 04:16:44.323941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.328 qpair failed and we were unable to recover it.
00:33:29.328 [2024-07-25 04:16:44.324079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.328 [2024-07-25 04:16:44.324105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.328 qpair failed and we were unable to recover it.
00:33:29.328 [2024-07-25 04:16:44.324231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.328 [2024-07-25 04:16:44.324272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.328 qpair failed and we were unable to recover it.
00:33:29.328 [2024-07-25 04:16:44.324427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.328 [2024-07-25 04:16:44.324453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.328 qpair failed and we were unable to recover it.
00:33:29.328 [2024-07-25 04:16:44.324573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.328 [2024-07-25 04:16:44.324599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.328 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.324745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.324789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.324985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.325014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.325172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.325201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.325345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.325372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.325510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.325536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.325658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.325684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.325806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.325832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.326007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.326050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.326212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.326237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.326413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.326441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.326605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.326633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.326807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.326833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.326998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.327026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.327182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.327211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.327384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.327416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.327563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.327591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.327761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.327786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.327906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.327932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.328078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.328104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.328254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.328281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.328432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.328458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.328618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.328647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.328797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.328825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.328963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.328989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.329132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.329174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.329315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.329358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.329544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.329571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.329776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.329806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.329982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.330013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.330211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.330237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.330391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.330417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.330584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.330612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.330755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.330783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.330903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.330929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.331106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.331135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.331274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.331301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.331450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.329 [2024-07-25 04:16:44.331475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.329 qpair failed and we were unable to recover it.
00:33:29.329 [2024-07-25 04:16:44.331616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.331645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.331841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.331868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.332034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.332063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.332223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.332260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.332434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.332465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.332585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.332611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.332736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.332763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.332937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.332964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.333163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.333191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.333382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.333421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.333551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.333578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.333740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.333770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.333900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.333929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.334107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.334132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.334260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.334286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.334468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.334493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.334658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.334683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.334827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.334871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.335012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.335041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.335187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.335213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.335387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.335414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.335565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.335594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.335790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.335815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.335956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.335985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.336172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.336201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.336361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.336387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.336501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.336527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.336698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.336741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.336909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.336936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.337062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.337088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.337229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.337261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.337432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.337463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.337623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.337652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.337815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.337866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.338013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.338040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.338211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.338237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.338365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.338391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.338546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.330 [2024-07-25 04:16:44.338571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.330 qpair failed and we were unable to recover it.
00:33:29.330 [2024-07-25 04:16:44.338699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.338725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.338865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.338891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.339042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.339085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.339229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.339266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.339407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.339432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.339609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.339633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.339817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.339844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.339984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.340017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.340210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.340234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.340359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.340383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.340509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.340533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.340672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.340696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.340885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.340913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.341073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.341100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.341270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.331 [2024-07-25 04:16:44.341296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.331 qpair failed and we were unable to recover it.
00:33:29.331 [2024-07-25 04:16:44.341427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.341451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.341597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.341621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.341816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.341841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.342006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.342033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.342168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.342197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 
00:33:29.331 [2024-07-25 04:16:44.342371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.342398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.342593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.342622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.342784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.342812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.342976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.343002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.343123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.343148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 
00:33:29.331 [2024-07-25 04:16:44.343323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.343352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.343492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.343517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.343667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.343692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.343871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.343896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.344038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.344064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 
00:33:29.331 [2024-07-25 04:16:44.344221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.344256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.344417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.344446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.344579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.344605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.344778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.344804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.344946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.344979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 
00:33:29.331 [2024-07-25 04:16:44.345130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.345156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.345272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.345298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.345439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.345465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-07-25 04:16:44.345683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-07-25 04:16:44.345708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.345843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.345872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 
00:33:29.332 [2024-07-25 04:16:44.346000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.346029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.346253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.346296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.346449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.346474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.346654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.346679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.346829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.346855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 
00:33:29.332 [2024-07-25 04:16:44.347022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.347051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.347202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.347230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.347392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.347417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.347571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.347615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.347799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.347825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 
00:33:29.332 [2024-07-25 04:16:44.347974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.348000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.348108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.348150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.348326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.348353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.348502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.348528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.348652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.348695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 
00:33:29.332 [2024-07-25 04:16:44.348835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.348865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.349039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.349065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.349225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.349261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.349405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.349433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.349580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.349607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 
00:33:29.332 [2024-07-25 04:16:44.349754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.349796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.350057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.350116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.350269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.350295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.350418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.350443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.350596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.350625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 
00:33:29.332 [2024-07-25 04:16:44.350813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.350839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.351001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.351029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.351166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.351196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.351366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.351393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.351519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.351544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 
00:33:29.332 [2024-07-25 04:16:44.351686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.351712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-07-25 04:16:44.351866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-07-25 04:16:44.351892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.352020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.352046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.352163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.352189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.352335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.352361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 
00:33:29.333 [2024-07-25 04:16:44.352524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.352553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.352682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.352711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.352873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.352900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.353026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.353069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.353232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.353273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 
00:33:29.333 [2024-07-25 04:16:44.353417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.353443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.353589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.353615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.353762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.353790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.353951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.353976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.354168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.354196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 
00:33:29.333 [2024-07-25 04:16:44.354347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.354374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.354497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.354523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.354668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.354711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.354864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.354893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.355058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.355084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 
00:33:29.333 [2024-07-25 04:16:44.355202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.355228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.355409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.355438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.355605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.355631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.355756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.355797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.355924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.355952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 
00:33:29.333 [2024-07-25 04:16:44.356101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.356126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.356297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.356323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.356459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.356487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.356624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.356650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.356765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.356791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 
00:33:29.333 [2024-07-25 04:16:44.356962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.356989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.357132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.357157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.357293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.357339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.357482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.357511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-07-25 04:16:44.357649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-07-25 04:16:44.357675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 
00:33:29.336 [2024-07-25 04:16:44.375817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.375860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 00:33:29.336 [2024-07-25 04:16:44.376035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.376062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 00:33:29.336 [2024-07-25 04:16:44.376211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.376237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 00:33:29.336 [2024-07-25 04:16:44.376409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.376436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 00:33:29.336 [2024-07-25 04:16:44.376583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.376609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 
00:33:29.336 [2024-07-25 04:16:44.376755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.376781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 00:33:29.336 [2024-07-25 04:16:44.376932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.376966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 00:33:29.336 [2024-07-25 04:16:44.377135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.377164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 00:33:29.336 [2024-07-25 04:16:44.377316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.377343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 00:33:29.336 [2024-07-25 04:16:44.377487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.377513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 
00:33:29.336 [2024-07-25 04:16:44.377670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.377695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 00:33:29.336 [2024-07-25 04:16:44.377895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.377925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 00:33:29.336 [2024-07-25 04:16:44.378059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.378090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 00:33:29.336 [2024-07-25 04:16:44.378236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.378274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 00:33:29.336 [2024-07-25 04:16:44.378401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.378426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 
00:33:29.336 [2024-07-25 04:16:44.378571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.378601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.336 qpair failed and we were unable to recover it. 00:33:29.336 [2024-07-25 04:16:44.378780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.336 [2024-07-25 04:16:44.378806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.378951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.378993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.379158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.379187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.379336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.379363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 
00:33:29.337 [2024-07-25 04:16:44.379483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.379509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.379684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.379713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.379856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.379881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.380006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.380032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.380211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.380240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 
00:33:29.337 [2024-07-25 04:16:44.380393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.380420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.380617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.380646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.380814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.380840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.380989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.381015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.381192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.381221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 
00:33:29.337 [2024-07-25 04:16:44.381365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.381391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.381518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.381544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.381696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.381723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.381965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.382014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.382155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.382182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 
00:33:29.337 [2024-07-25 04:16:44.382303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.382330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.382504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.382546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.382681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.382708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.382818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.382845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.382982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.383011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 
00:33:29.337 [2024-07-25 04:16:44.383164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.383193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.383372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.383399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.383518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.383544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.383753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.383779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.383894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.383919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 
00:33:29.337 [2024-07-25 04:16:44.384039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.384065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.384178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.384208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.384341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.384367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.384545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.384570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.384768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.384794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 
00:33:29.337 [2024-07-25 04:16:44.384936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.384965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.385153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.337 [2024-07-25 04:16:44.385182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.337 qpair failed and we were unable to recover it. 00:33:29.337 [2024-07-25 04:16:44.385352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.385379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.385521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.385547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.385670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.385698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 
00:33:29.338 [2024-07-25 04:16:44.385886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.385913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.386117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.386146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.386316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.386343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.386468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.386494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.386643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.386686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 
00:33:29.338 [2024-07-25 04:16:44.386887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.386939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.387093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.387120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.387272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.387299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.387419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.387446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.387563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.387589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 
00:33:29.338 [2024-07-25 04:16:44.387782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.387812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.388012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.388038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.388163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.388190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.388325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.388352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.388502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.388528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 
00:33:29.338 [2024-07-25 04:16:44.388674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.388700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.388873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.388902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.389091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.389120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.389281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.389307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.389460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.389486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 
00:33:29.338 [2024-07-25 04:16:44.389632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.389661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.389828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.389854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.389976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.390002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.390112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.390138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.390266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.390293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 
00:33:29.338 [2024-07-25 04:16:44.390445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.390472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.390658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.390684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.390854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.390881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.391038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.391064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.391210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.391262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 
00:33:29.338 [2024-07-25 04:16:44.391410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.391437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.391556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.391586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.391731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.391761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.391947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.338 [2024-07-25 04:16:44.391973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.338 qpair failed and we were unable to recover it. 00:33:29.338 [2024-07-25 04:16:44.392139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.392168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 
00:33:29.339 [2024-07-25 04:16:44.392373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.392400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.392510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.392537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.392772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.392801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.392955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.393001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.393159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.393188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 
00:33:29.339 [2024-07-25 04:16:44.393379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.393405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.393548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.393575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.393819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.393845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.394013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.394042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.394209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.394239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 
00:33:29.339 [2024-07-25 04:16:44.394430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.394458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.394605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.394632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.394796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.394837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.395007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.395033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.395156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.395200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 
00:33:29.339 [2024-07-25 04:16:44.395376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.395404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.395575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.395601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.395762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.395791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.395975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.396005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.396169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.396199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 
00:33:29.339 [2024-07-25 04:16:44.396378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.396405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.396604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.396633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.396790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.396837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.396982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.397011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.397187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.397216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 
00:33:29.339 [2024-07-25 04:16:44.397400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.397439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.397611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.397656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.397829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.397872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.398007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.398050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.398205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.398232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 
00:33:29.339 [2024-07-25 04:16:44.398389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.398415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.398551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.398596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.398738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.398780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.398935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.398961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.399076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.399102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 
00:33:29.339 [2024-07-25 04:16:44.399218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.399251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-07-25 04:16:44.399379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-07-25 04:16:44.399410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.399648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.399675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.399829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.399855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.400080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.400106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 
00:33:29.340 [2024-07-25 04:16:44.400256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.400284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.400409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.400436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.400586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.400612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.400788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.400814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.400972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.400999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 
00:33:29.340 [2024-07-25 04:16:44.401141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.401167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.401322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.401350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.401477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.401503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.401671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.401715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.401856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.401885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 
00:33:29.340 [2024-07-25 04:16:44.402030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.402056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.402178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.402205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.402387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.402414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.402582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.402608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.402742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.402780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 
00:33:29.340 [2024-07-25 04:16:44.402936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.402963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.403081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.403107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.403222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.403256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.403400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.403428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.403593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.403621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 
00:33:29.340 [2024-07-25 04:16:44.403759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.403789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.403923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.403953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.404127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.404157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.404336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.404365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.404515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.404558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 
00:33:29.340 [2024-07-25 04:16:44.404739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.404782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.404925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.404968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.405142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.405168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.405326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.405371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.405541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.405585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 
00:33:29.340 [2024-07-25 04:16:44.405753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.405800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.405971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.406015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.406144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.406172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.406298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.406324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-07-25 04:16:44.406477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-07-25 04:16:44.406519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 
00:33:29.341 [2024-07-25 04:16:44.406658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.406686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-07-25 04:16:44.406845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.406873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-07-25 04:16:44.407042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.407071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-07-25 04:16:44.407217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.407249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-07-25 04:16:44.407402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.407428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 
00:33:29.341 [2024-07-25 04:16:44.407597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.407626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-07-25 04:16:44.407846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.407875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-07-25 04:16:44.408041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.408070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-07-25 04:16:44.408202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.408231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-07-25 04:16:44.408430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.408456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 
00:33:29.341 [2024-07-25 04:16:44.408597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.408625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-07-25 04:16:44.408782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.408810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-07-25 04:16:44.408987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.409028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-07-25 04:16:44.409191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.409220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-07-25 04:16:44.409364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-07-25 04:16:44.409390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 
00:33:29.341 [2024-07-25 04:16:44.409525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.341 [2024-07-25 04:16:44.409564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.341 qpair failed and we were unable to recover it.
00:33:29.341 [2024-07-25 04:16:44.411355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.341 [2024-07-25 04:16:44.411386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.341 qpair failed and we were unable to recover it.
[The three-line record above repeats ~115 times between 04:16:44.409525 and 04:16:44.430953, alternating between tqpair=0x7f5408000b90 and tqpair=0x5fc4b0; every connection attempt to 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED) and ends with "qpair failed and we were unable to recover it."]
00:33:29.344 [2024-07-25 04:16:44.431093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.431122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-07-25 04:16:44.431297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.431323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-07-25 04:16:44.431448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.431474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-07-25 04:16:44.431622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.431647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-07-25 04:16:44.431817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.431845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 
00:33:29.344 [2024-07-25 04:16:44.432012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.432040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-07-25 04:16:44.432174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.432204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-07-25 04:16:44.432386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.432412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-07-25 04:16:44.432554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.432579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-07-25 04:16:44.432748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.432776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 
00:33:29.344 [2024-07-25 04:16:44.432912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.432940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-07-25 04:16:44.433108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.433137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-07-25 04:16:44.433331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.433357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-07-25 04:16:44.433535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.433565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-07-25 04:16:44.433713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.433739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 
00:33:29.344 [2024-07-25 04:16:44.433897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.433923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-07-25 04:16:44.434070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-25 04:16:44.434099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-07-25 04:16:44.434248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.434275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.434446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.434472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.434617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.434645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 
00:33:29.345 [2024-07-25 04:16:44.434780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.434824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.434990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.435019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.435156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.435184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.435322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.435349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.435497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.435538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 
00:33:29.345 [2024-07-25 04:16:44.435725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.435754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.435915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.435943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.436068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.436097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.436230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.436292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.436469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.436494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 
00:33:29.345 [2024-07-25 04:16:44.436620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.436645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.436769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.436794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.436967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.436995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.437126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.437155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.437306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.437337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 
00:33:29.345 [2024-07-25 04:16:44.437448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.437473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.437592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.437634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.437806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.437832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.438007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.438035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.438166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.438193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 
00:33:29.345 [2024-07-25 04:16:44.438364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.438390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.438560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.438587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.438731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.438756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.438946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.438974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.439116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.439145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 
00:33:29.345 [2024-07-25 04:16:44.439322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.439348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.439538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.439567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.439695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.439745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.439927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.439955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.440144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.440173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 
00:33:29.345 [2024-07-25 04:16:44.440333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.440372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.440531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.440558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.440708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.440752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.440941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.440996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-07-25 04:16:44.441152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.441177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 
00:33:29.345 [2024-07-25 04:16:44.441302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-25 04:16:44.441329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.441472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.441517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.441715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.441758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.441918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.441962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.442134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.442161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 
00:33:29.346 [2024-07-25 04:16:44.442334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.442361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.442510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.442540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.442738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.442786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.442995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.443044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.443176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.443206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 
00:33:29.346 [2024-07-25 04:16:44.443365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.443394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.443562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.443606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.443802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.443832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.443980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.444031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.444199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.444225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 
00:33:29.346 [2024-07-25 04:16:44.444378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.444404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.444609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.444639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.444796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.444840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.445015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.445060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.445221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.445255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 
00:33:29.346 [2024-07-25 04:16:44.445413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.445439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.445608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.445652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.445904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.445952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.446103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.446130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.446292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.446320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 
00:33:29.346 [2024-07-25 04:16:44.446495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.446525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.446719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.446746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.446918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.446962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.447115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.447143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.447296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.447322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 
00:33:29.346 [2024-07-25 04:16:44.447478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.447504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.447660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.447685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.447870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-25 04:16:44.447918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-07-25 04:16:44.448155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.448209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.448409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.448435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 
00:33:29.347 [2024-07-25 04:16:44.448602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.448631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.448765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.448793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.448978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.449006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.449162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.449191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.449386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.449412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 
00:33:29.347 [2024-07-25 04:16:44.449554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.449580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.449698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.449723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.449897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.449940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.450078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.450106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.450236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.450271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 
00:33:29.347 [2024-07-25 04:16:44.450466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.450492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.450665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.450692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.450862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.450892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.451049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.451078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.451268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.451311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 
00:33:29.347 [2024-07-25 04:16:44.451426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.451452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.451654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.451682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.451878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.451926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.452088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.452116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.452259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.452317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 
00:33:29.347 [2024-07-25 04:16:44.452473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.452499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.452669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.452699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.452831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.452860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.453016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.453044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.453200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.453229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 
00:33:29.347 [2024-07-25 04:16:44.453421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.453450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.453561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.453587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.453705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.453731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.453953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.453982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.454111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.454139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 
00:33:29.347 [2024-07-25 04:16:44.454321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.454347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.454471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.454498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.454711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.454737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.454895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.454923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.455091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.455119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 
00:33:29.347 [2024-07-25 04:16:44.455293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.455320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-07-25 04:16:44.455460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-07-25 04:16:44.455485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.455659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.455688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.455840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.455883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.456066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.456095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 
00:33:29.348 [2024-07-25 04:16:44.456233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.456283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.456409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.456434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.456618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.456661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.456880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.456909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.457068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.457096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 
00:33:29.348 [2024-07-25 04:16:44.457228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.457263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.457396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.457421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.457534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.457560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.457708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.457733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.457906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.457936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 
00:33:29.348 [2024-07-25 04:16:44.458124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.458153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.458305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.458331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.458448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.458474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.458626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.458652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.458847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.458875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 
00:33:29.348 [2024-07-25 04:16:44.459038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.459066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.459255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.459281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.459400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.459425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.459577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.459606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.459772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.459798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 
00:33:29.348 [2024-07-25 04:16:44.459970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.460024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.460185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.460213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.460388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.460414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.460582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.460610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.460741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.460769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 
00:33:29.348 [2024-07-25 04:16:44.460998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.461048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.461183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.461216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.461392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.461417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.461589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.461615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.461804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.461833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 
00:33:29.348 [2024-07-25 04:16:44.461993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.462021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.462258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.462301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.462449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.462474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.462638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.348 [2024-07-25 04:16:44.462667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.348 qpair failed and we were unable to recover it. 00:33:29.348 [2024-07-25 04:16:44.462841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.349 [2024-07-25 04:16:44.462867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.349 qpair failed and we were unable to recover it. 
00:33:29.349 [2024-07-25 04:16:44.463022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.349 [2024-07-25 04:16:44.463047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.349 qpair failed and we were unable to recover it. 00:33:29.349 [2024-07-25 04:16:44.463236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.349 [2024-07-25 04:16:44.463272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.349 qpair failed and we were unable to recover it. 00:33:29.349 [2024-07-25 04:16:44.463440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.349 [2024-07-25 04:16:44.463466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.349 qpair failed and we were unable to recover it. 00:33:29.349 [2024-07-25 04:16:44.463614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.349 [2024-07-25 04:16:44.463656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.349 qpair failed and we were unable to recover it. 00:33:29.349 [2024-07-25 04:16:44.463816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.349 [2024-07-25 04:16:44.463844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.349 qpair failed and we were unable to recover it. 
00:33:29.349 [2024-07-25 04:16:44.464020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.349 [2024-07-25 04:16:44.464046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.349 qpair failed and we were unable to recover it.
00:33:29.349 [... the connect()/qpair-connect error pair above repeats for every retry between 04:16:44.464020 and 04:16:44.485480 (~115 attempts, elapsed 00:33:29.349-00:33:29.352); the target is addr=10.0.0.2, port=4420 throughout, with tqpair=0x5fc4b0 through 04:16:44.477116 and tqpair=0x7f5410000b90 thereafter (plus a handful of interleaved 0x5fc4b0 records around 04:16:44.476-477), each attempt ending in "qpair failed and we were unable to recover it." ...]
00:33:29.352 [2024-07-25 04:16:44.485623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.485649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.485801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.485826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.485944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.485971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.486095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.486137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.486258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.486303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 
00:33:29.352 [2024-07-25 04:16:44.486431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.486457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.486644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.486683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.486905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.486932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.487062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.487088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.487229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.487261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 
00:33:29.352 [2024-07-25 04:16:44.487382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.487407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.487528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.487554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.487723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.487765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.487967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.487995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.488141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.488167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 
00:33:29.352 [2024-07-25 04:16:44.488315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.488341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.488527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.488570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.488721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.488748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.488897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.488924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.489077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.489107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 
00:33:29.352 [2024-07-25 04:16:44.489267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.489293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.489418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.489444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.489612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.489640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.489807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.489833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.489950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.489993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 
00:33:29.352 [2024-07-25 04:16:44.490133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-07-25 04:16:44.490161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-07-25 04:16:44.490327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.490353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.490505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.490530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.490678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.490703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.490823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.490849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 
00:33:29.353 [2024-07-25 04:16:44.491017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.491058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.491191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.491219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.491365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.491391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.491544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.491591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.491716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.491744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 
00:33:29.353 [2024-07-25 04:16:44.491888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.491914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.492040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.492066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.492261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.492306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.492456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.492482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.492624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.492653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 
00:33:29.353 [2024-07-25 04:16:44.492815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.492844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.493037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.493062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.493189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.493214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.493339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.493367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.493489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.493516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 
00:33:29.353 [2024-07-25 04:16:44.493676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.493704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.493901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.493927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.494075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.494101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.494230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.494275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.494456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.494483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 
00:33:29.353 [2024-07-25 04:16:44.494641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.494668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.494789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.494832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.494980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.495011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.495202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.495231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.495374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.495400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 
00:33:29.353 [2024-07-25 04:16:44.495551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.495578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.495787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.495814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.496075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.496101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.496278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.496305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.496435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.496462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 
00:33:29.353 [2024-07-25 04:16:44.496612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.496643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.496810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.496840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.496989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.497015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.497138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-07-25 04:16:44.497164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-07-25 04:16:44.497371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.497398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 
00:33:29.354 [2024-07-25 04:16:44.497550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.497576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.497698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.497743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.497903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.497932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.498129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.498155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.498294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.498321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 
00:33:29.354 [2024-07-25 04:16:44.498444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.498470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.498619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.498645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.498767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.498793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.499020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.499045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.499220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.499251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 
00:33:29.354 [2024-07-25 04:16:44.499428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.499454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.499630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.499659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.499889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.499915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.500115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.500144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.500300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.500342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 
00:33:29.354 [2024-07-25 04:16:44.500514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.500540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.500705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.500734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.500895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.500924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.501112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.501140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-07-25 04:16:44.501273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-07-25 04:16:44.501316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 
00:33:29.354 [2024-07-25 04:16:44.501463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.501490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.501612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.501638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.501813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.501869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.502025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.502053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.502194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.502220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.502362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.502401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.502579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.502606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.502784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.502810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.502964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.502991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.503185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.503214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.503360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.503387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.503532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.503558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.503737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.503762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.503908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.503935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.504057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.504082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.504223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.504255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.504439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-07-25 04:16:44.504464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
00:33:29.354 [2024-07-25 04:16:44.504596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.504624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.504792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.504820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.504981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.505007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.505152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.505195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.505368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.505395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.505543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.505568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.505710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.505735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.505852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.505877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.506022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.506065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.506254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.506311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.506467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.506494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.506617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.506643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.506798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.506841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.507000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.507028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.507197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.507223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.507343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.507369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.507493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.507518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.507659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.507685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.507860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.507913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.508045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.508074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.508273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.508299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.508471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.508499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.508657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.508685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.508849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.508875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.509020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.509066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.509224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.509259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.509453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.509479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.509683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.509733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.509899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.509927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.510124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.510149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.510344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.510374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.510534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.510563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.510728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-07-25 04:16:44.510754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
00:33:29.355 [2024-07-25 04:16:44.510945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.510974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.511136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.511165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.511359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.511385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.511582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.511610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.511764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.511792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.511960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.511986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.512096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.512145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.512290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.512320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.512494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.512520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.512669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.512695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.512873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.512899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.513054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.513079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.513207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.513233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.513387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.513412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.513532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.513557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.513690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.513715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.513862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.513888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.514010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.514035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.514175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.514201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.514350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.514377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.514570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.514610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.514768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.514796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.514993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.515040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.515161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.515189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.515338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.515366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.515514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.515540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.515747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.515794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.516059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.516113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.516260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.516287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.516437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.516482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.516627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.516670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.516835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.516878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.517024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.517050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.517226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.517263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.517425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.517470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.517664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.517693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.517882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.517926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.518102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.356 [2024-07-25 04:16:44.518128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.356 qpair failed and we were unable to recover it.
00:33:29.356 [2024-07-25 04:16:44.518254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.357 [2024-07-25 04:16:44.518281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.357 qpair failed and we were unable to recover it.
00:33:29.357 [2024-07-25 04:16:44.518414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.357 [2024-07-25 04:16:44.518459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.357 qpair failed and we were unable to recover it.
00:33:29.357 [2024-07-25 04:16:44.518630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.357 [2024-07-25 04:16:44.518675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.357 qpair failed and we were unable to recover it.
00:33:29.357 [2024-07-25 04:16:44.518878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.357 [2024-07-25 04:16:44.518924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.357 qpair failed and we were unable to recover it.
00:33:29.357 [2024-07-25 04:16:44.519046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.357 [2024-07-25 04:16:44.519073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.357 qpair failed and we were unable to recover it.
00:33:29.357 [2024-07-25 04:16:44.519220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.357 [2024-07-25 04:16:44.519253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.357 qpair failed and we were unable to recover it.
00:33:29.357 [2024-07-25 04:16:44.519428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.357 [2024-07-25 04:16:44.519455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.357 qpair failed and we were unable to recover it.
00:33:29.357 [2024-07-25 04:16:44.519621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.519650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.519817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.519843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.520063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.520123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.520310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.520337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.520508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.520549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 
00:33:29.357 [2024-07-25 04:16:44.520803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.520853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.521014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.521042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.521205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.521234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.521389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.521416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.521611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.521640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 
00:33:29.357 [2024-07-25 04:16:44.521796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.521824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.522065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.522117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.522291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.522318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.522442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.522469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.522614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.522640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 
00:33:29.357 [2024-07-25 04:16:44.522781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.522828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.522989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.523018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.523192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.523218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.523344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.523370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.523484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.523511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 
00:33:29.357 [2024-07-25 04:16:44.523701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.523758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.523922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.523951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.524109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.524137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.524307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.524346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.524500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.524527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 
00:33:29.357 [2024-07-25 04:16:44.524642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.524669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.524817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.524844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.524984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.525028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.525181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.525208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.525368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.525396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 
00:33:29.357 [2024-07-25 04:16:44.525510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-07-25 04:16:44.525536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-07-25 04:16:44.525708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.525734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.525876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.525905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.526040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.526068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.526237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.526272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 
00:33:29.358 [2024-07-25 04:16:44.526409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.526435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.526559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.526586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.526786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.526816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.526967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.527009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.527162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.527191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 
00:33:29.358 [2024-07-25 04:16:44.527334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.527360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.527471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.527514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.527641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.527674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.527832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.527861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.528024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.528052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 
00:33:29.358 [2024-07-25 04:16:44.528220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.528264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.528456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.528482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.528719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.528770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.528935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.528965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.529118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.529147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 
00:33:29.358 [2024-07-25 04:16:44.529325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.529351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.529475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.529502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.529684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.529713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.529945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.529973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.530103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.530132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 
00:33:29.358 [2024-07-25 04:16:44.530331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.530357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.530513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.530539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.530676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.530706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.530881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.530907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.531186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.531215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 
00:33:29.358 [2024-07-25 04:16:44.531373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.531400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.531594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.531623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.531782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.531811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.531972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.532002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.532172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.532198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 
00:33:29.358 [2024-07-25 04:16:44.532319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.532345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.532542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.532571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.532708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.532737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-07-25 04:16:44.532896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-07-25 04:16:44.532925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.533081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.533110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 
00:33:29.359 [2024-07-25 04:16:44.533316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.533343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.533480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.533506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.533620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.533662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.533793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.533821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.534051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.534080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 
00:33:29.359 [2024-07-25 04:16:44.534212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.534246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.534411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.534437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.534602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.534628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.534775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.534817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.534968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.534996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 
00:33:29.359 [2024-07-25 04:16:44.535154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.535182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.535326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.535353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.535469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.535495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.535633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.535672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.535845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.535893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 
00:33:29.359 [2024-07-25 04:16:44.536091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.536135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.536266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.536295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.536424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.536450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.536591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.536635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.536807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.536852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 
00:33:29.359 [2024-07-25 04:16:44.537009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.537035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.537177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.537204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.537399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.537444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.537593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.537636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-07-25 04:16:44.537807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-07-25 04:16:44.537852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 
00:33:29.359 [2024-07-25 04:16:44.537974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.359 [2024-07-25 04:16:44.538000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.359 qpair failed and we were unable to recover it.
00:33:29.359 [2024-07-25 04:16:44.538146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.359 [2024-07-25 04:16:44.538182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.359 qpair failed and we were unable to recover it.
00:33:29.359 [2024-07-25 04:16:44.538356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.359 [2024-07-25 04:16:44.538402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.359 qpair failed and we were unable to recover it.
00:33:29.359 [2024-07-25 04:16:44.538569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.359 [2024-07-25 04:16:44.538614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.359 qpair failed and we were unable to recover it.
00:33:29.359 [2024-07-25 04:16:44.538744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.359 [2024-07-25 04:16:44.538789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.359 qpair failed and we were unable to recover it.
00:33:29.359 [2024-07-25 04:16:44.538946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.359 [2024-07-25 04:16:44.538972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.359 qpair failed and we were unable to recover it.
00:33:29.359 [2024-07-25 04:16:44.539147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.359 [2024-07-25 04:16:44.539173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.359 qpair failed and we were unable to recover it.
00:33:29.359 [2024-07-25 04:16:44.539344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.359 [2024-07-25 04:16:44.539390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.359 qpair failed and we were unable to recover it.
00:33:29.359 [2024-07-25 04:16:44.539536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.359 [2024-07-25 04:16:44.539581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.359 qpair failed and we were unable to recover it.
00:33:29.359 [2024-07-25 04:16:44.539754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.359 [2024-07-25 04:16:44.539803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.359 qpair failed and we were unable to recover it.
00:33:29.359 [2024-07-25 04:16:44.539965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.359 [2024-07-25 04:16:44.539994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.359 qpair failed and we were unable to recover it.
00:33:29.359 [2024-07-25 04:16:44.540179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.359 [2024-07-25 04:16:44.540208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.359 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.540371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.540401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.540562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.540592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.540754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.540783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.540919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.540948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.541086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.541114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.541268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.541310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.541423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.541449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.541623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.541665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.541816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.541844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.542013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.542041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.542205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.542233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.542383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.542409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.542553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.542580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.542739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.542767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.542895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.542925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.543058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.543087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.543276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.543319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.543442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.543468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.543640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.543665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.543860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.543888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.544047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.544076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.544253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.544279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.544423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.544449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.544621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.544649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.544836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.544864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.545026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.545054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.545202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.545227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.545355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.545381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.545533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.545559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.545717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.545745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.545901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.545929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.546053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.546082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.546215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.360 [2024-07-25 04:16:44.546250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.360 qpair failed and we were unable to recover it.
00:33:29.360 [2024-07-25 04:16:44.546395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.546421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.546582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.546611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.546738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.546766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.546953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.546982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.547141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.547169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.547350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.547376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.547521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.547547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.547713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.547742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.547884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.547912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.548065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.548094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.548317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.548343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.548464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.548490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.548611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.548638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.548759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.548785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.548944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.548986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.549170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.549198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.549369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.549395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.549567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.549595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.549768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.549794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.549912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.549939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.550139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.550168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.550301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.550327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.550505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.550531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.550721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.550750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.550910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.550943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.551069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.551097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.551254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.551298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.551424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.551450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.551566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.551608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.551805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.551847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.552105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.552134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.552302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.552329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.552501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.552544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.552706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.552735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.552898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.552928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.553118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.553147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.553315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.553341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.553537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.553565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.361 qpair failed and we were unable to recover it.
00:33:29.361 [2024-07-25 04:16:44.553758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.361 [2024-07-25 04:16:44.553787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.362 qpair failed and we were unable to recover it.
00:33:29.362 [2024-07-25 04:16:44.554035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.362 [2024-07-25 04:16:44.554064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.362 qpair failed and we were unable to recover it.
00:33:29.362 [2024-07-25 04:16:44.554224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.362 [2024-07-25 04:16:44.554260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.362 qpair failed and we were unable to recover it.
00:33:29.362 [2024-07-25 04:16:44.554429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.362 [2024-07-25 04:16:44.554454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.362 qpair failed and we were unable to recover it.
00:33:29.362 [2024-07-25 04:16:44.554592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.362 [2024-07-25 04:16:44.554618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.362 qpair failed and we were unable to recover it.
00:33:29.362 [2024-07-25 04:16:44.554742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.362 [2024-07-25 04:16:44.554786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.362 qpair failed and we were unable to recover it.
00:33:29.362 [2024-07-25 04:16:44.554925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.362 [2024-07-25 04:16:44.554953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.362 qpair failed and we were unable to recover it.
00:33:29.362 [2024-07-25 04:16:44.555143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.362 [2024-07-25 04:16:44.555172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.362 qpair failed and we were unable to recover it.
00:33:29.362 [2024-07-25 04:16:44.555321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.362 [2024-07-25 04:16:44.555347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.362 qpair failed and we were unable to recover it.
00:33:29.362 [2024-07-25 04:16:44.555492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.362 [2024-07-25 04:16:44.555518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.362 qpair failed and we were unable to recover it.
00:33:29.362 [2024-07-25 04:16:44.555632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.362 [2024-07-25 04:16:44.555658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.362 qpair failed and we were unable to recover it.
00:33:29.362 [2024-07-25 04:16:44.555828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.362 [2024-07-25 04:16:44.555868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.362 qpair failed and we were unable to recover it.
00:33:29.362 [2024-07-25 04:16:44.556053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.556081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.556218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.556256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.556423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.556449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.556574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.556600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.556748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.556774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 
00:33:29.362 [2024-07-25 04:16:44.556897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.556940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.557094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.557122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.557288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.557315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.557436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.557477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.557610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.557638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 
00:33:29.362 [2024-07-25 04:16:44.557783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.557809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.557964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.557990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.558122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.558148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.558265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.558291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.558439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.558465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 
00:33:29.362 [2024-07-25 04:16:44.558635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.558669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.558842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.558867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.559019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.559045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.559173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.559202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.559386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.559412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 
00:33:29.362 [2024-07-25 04:16:44.559603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.559632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.559799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.559828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.559996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.560022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.560172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.560215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.560367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.560393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 
00:33:29.362 [2024-07-25 04:16:44.560534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.560560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.362 [2024-07-25 04:16:44.560726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.362 [2024-07-25 04:16:44.560756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.362 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.560885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.560913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.561072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.561098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.561272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.561323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 
00:33:29.363 [2024-07-25 04:16:44.561464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.561493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.561628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.561655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.561801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.561842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.561978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.562007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.562170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.562196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 
00:33:29.363 [2024-07-25 04:16:44.562307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.562349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.562511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.562539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.562679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.562705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.562849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.562892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.563017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.563046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 
00:33:29.363 [2024-07-25 04:16:44.563180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.563206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.563354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.563380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.563492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.563522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.563704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.563730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.563923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.563951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 
00:33:29.363 [2024-07-25 04:16:44.564087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.564115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.564269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.564296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.564508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.564537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.564711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.564740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.564885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.564910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 
00:33:29.363 [2024-07-25 04:16:44.565058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.565101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.565232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.565275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.565429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.565455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.565596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.565621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.565788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.565817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 
00:33:29.363 [2024-07-25 04:16:44.566017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.566043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.566219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.566255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.566425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.566454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.566596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.566622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.566774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.566800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 
00:33:29.363 [2024-07-25 04:16:44.566954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.566982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.567124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.567149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.567325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.567352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.567513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.567555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.567695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.567724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 
00:33:29.363 [2024-07-25 04:16:44.567874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.363 [2024-07-25 04:16:44.567900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.363 qpair failed and we were unable to recover it. 00:33:29.363 [2024-07-25 04:16:44.568054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.568099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.568266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.568292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.568463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.568488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.568679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.568705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 
00:33:29.364 [2024-07-25 04:16:44.568855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.568881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.569077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.569107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.569267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.569296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.569456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.569481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.569606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.569631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 
00:33:29.364 [2024-07-25 04:16:44.569755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.569780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.569898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.569924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.570082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.570108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.570256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.570300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.570468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.570493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 
00:33:29.364 [2024-07-25 04:16:44.570610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.570636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.570777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.570803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.570951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.570977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.571102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.571148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.571348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.571377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 
00:33:29.364 [2024-07-25 04:16:44.571547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.571573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.571704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.571729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.571842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.571868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.571983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.572009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.572160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.572187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 
00:33:29.364 [2024-07-25 04:16:44.572366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.572395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.572557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.572582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.572751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.572780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.572971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.572999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.573195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.573220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 
00:33:29.364 [2024-07-25 04:16:44.573365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.573391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.573536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.573561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.573678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.573703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.573851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.573894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.574065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.574091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 
00:33:29.364 [2024-07-25 04:16:44.574237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.574287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.574435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.364 [2024-07-25 04:16:44.574461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.364 qpair failed and we were unable to recover it. 00:33:29.364 [2024-07-25 04:16:44.574609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.574637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.574800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.574826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.575026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.575078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 
00:33:29.365 [2024-07-25 04:16:44.575274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.575304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.575449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.575475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.575663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.575692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.575865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.575891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.576064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.576089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 
00:33:29.365 [2024-07-25 04:16:44.576224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.576264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.576436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.576463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.576607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.576633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.576779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.576805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.576928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.576954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 
00:33:29.365 [2024-07-25 04:16:44.577093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.577119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.577306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.577335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.577499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.577528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.577714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.577739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.577880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.577909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 
00:33:29.365 [2024-07-25 04:16:44.578074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.578102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.578275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.578301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.578496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.578525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.578681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.578710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.578887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.578913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 
00:33:29.365 [2024-07-25 04:16:44.579071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.579100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.579258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.579287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.579430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.579456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.579600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.579642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.365 [2024-07-25 04:16:44.579818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.579843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 
00:33:29.365 [2024-07-25 04:16:44.579988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.365 [2024-07-25 04:16:44.580014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.365 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.580158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.580183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.580364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.580393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.580557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.580583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.580762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.580815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 
00:33:29.366 [2024-07-25 04:16:44.580947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.580976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.581164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.581194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.581403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.581429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.581589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.581615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.581763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.581790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 
00:33:29.366 [2024-07-25 04:16:44.581931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.581961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.582157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.582186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.582333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.582359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.582508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.582553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.582684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.582712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 
00:33:29.366 [2024-07-25 04:16:44.582878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.582904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.583017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.583057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.583183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.583212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.583387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.583414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.583552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.583578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 
00:33:29.366 [2024-07-25 04:16:44.583725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.583750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.583896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.583925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.584066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.584092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.584248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.584292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.584482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.584508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 
00:33:29.366 [2024-07-25 04:16:44.584677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.584705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.584868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.584897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.585038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.585064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.585182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.585207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.585395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.585424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 
00:33:29.366 [2024-07-25 04:16:44.585572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.585597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.585715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.585741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.585913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.585942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.586091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.586116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.586271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.586298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 
00:33:29.366 [2024-07-25 04:16:44.586469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.366 [2024-07-25 04:16:44.586498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.366 qpair failed and we were unable to recover it. 00:33:29.366 [2024-07-25 04:16:44.586667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.367 [2024-07-25 04:16:44.586693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.367 qpair failed and we were unable to recover it. 00:33:29.367 [2024-07-25 04:16:44.586801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.367 [2024-07-25 04:16:44.586844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.367 qpair failed and we were unable to recover it. 00:33:29.367 [2024-07-25 04:16:44.587004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.367 [2024-07-25 04:16:44.587033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.367 qpair failed and we were unable to recover it. 00:33:29.367 [2024-07-25 04:16:44.587202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.367 [2024-07-25 04:16:44.587227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.367 qpair failed and we were unable to recover it. 
00:33:29.367 [2024-07-25 04:16:44.587383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.367 [2024-07-25 04:16:44.587412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.367 qpair failed and we were unable to recover it. 00:33:29.367 [2024-07-25 04:16:44.587576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.367 [2024-07-25 04:16:44.587601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.367 qpair failed and we were unable to recover it. 00:33:29.367 [2024-07-25 04:16:44.587740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.367 [2024-07-25 04:16:44.587765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.367 qpair failed and we were unable to recover it. 00:33:29.367 [2024-07-25 04:16:44.587884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.367 [2024-07-25 04:16:44.587909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.367 qpair failed and we were unable to recover it. 00:33:29.367 [2024-07-25 04:16:44.588022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.367 [2024-07-25 04:16:44.588047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.367 qpair failed and we were unable to recover it. 
00:33:29.367 [2024-07-25 04:16:44.588219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.367 [2024-07-25 04:16:44.588281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.367 qpair failed and we were unable to recover it. 00:33:29.367 [2024-07-25 04:16:44.588404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.367 [2024-07-25 04:16:44.588430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.367 qpair failed and we were unable to recover it. 00:33:29.367 [2024-07-25 04:16:44.588599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.367 [2024-07-25 04:16:44.588628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.367 qpair failed and we were unable to recover it. 00:33:29.367 [2024-07-25 04:16:44.588802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.367 [2024-07-25 04:16:44.588832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.367 qpair failed and we were unable to recover it. 00:33:29.367 [2024-07-25 04:16:44.588991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.367 [2024-07-25 04:16:44.589020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.367 qpair failed and we were unable to recover it. 
00:33:29.367 [2024-07-25 04:16:44.589178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.367 [2024-07-25 04:16:44.589207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.367 qpair failed and we were unable to recover it.
00:33:29.367 [2024-07-25 04:16:44.589360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.367 [2024-07-25 04:16:44.589386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.367 qpair failed and we were unable to recover it.
00:33:29.367 [2024-07-25 04:16:44.589506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.367 [2024-07-25 04:16:44.589532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.367 qpair failed and we were unable to recover it.
00:33:29.367 [2024-07-25 04:16:44.589709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.367 [2024-07-25 04:16:44.589737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.367 qpair failed and we were unable to recover it.
00:33:29.367 [2024-07-25 04:16:44.589906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.367 [2024-07-25 04:16:44.589932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.367 qpair failed and we were unable to recover it.
00:33:29.367 [2024-07-25 04:16:44.590091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.367 [2024-07-25 04:16:44.590120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.367 qpair failed and we were unable to recover it.
00:33:29.367 [2024-07-25 04:16:44.590321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.367 [2024-07-25 04:16:44.590347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.367 qpair failed and we were unable to recover it.
00:33:29.367 [2024-07-25 04:16:44.590494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.690 [2024-07-25 04:16:44.590520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.690 qpair failed and we were unable to recover it.
00:33:29.690 [2024-07-25 04:16:44.590665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.690 [2024-07-25 04:16:44.590691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.690 qpair failed and we were unable to recover it.
00:33:29.690 [2024-07-25 04:16:44.590804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.690 [2024-07-25 04:16:44.590830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.691 qpair failed and we were unable to recover it.
00:33:29.691 [2024-07-25 04:16:44.590948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.691 [2024-07-25 04:16:44.590974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.691 qpair failed and we were unable to recover it.
00:33:29.691 [2024-07-25 04:16:44.591088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.691 [2024-07-25 04:16:44.591113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.691 qpair failed and we were unable to recover it.
00:33:29.691 [2024-07-25 04:16:44.591287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.691 [2024-07-25 04:16:44.591346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.691 qpair failed and we were unable to recover it.
00:33:29.691 [2024-07-25 04:16:44.591551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.691 [2024-07-25 04:16:44.591579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.691 qpair failed and we were unable to recover it.
00:33:29.691 [2024-07-25 04:16:44.591759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.691 [2024-07-25 04:16:44.591791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.691 qpair failed and we were unable to recover it.
00:33:29.691 [2024-07-25 04:16:44.591958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.691 [2024-07-25 04:16:44.591988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.691 qpair failed and we were unable to recover it.
00:33:29.691 [2024-07-25 04:16:44.592133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.691 [2024-07-25 04:16:44.592162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.691 qpair failed and we were unable to recover it.
00:33:29.691 [2024-07-25 04:16:44.592309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.692 [2024-07-25 04:16:44.592337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.692 qpair failed and we were unable to recover it.
00:33:29.692 [2024-07-25 04:16:44.592483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.692 [2024-07-25 04:16:44.592509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.692 qpair failed and we were unable to recover it.
00:33:29.692 [2024-07-25 04:16:44.592631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.692 [2024-07-25 04:16:44.592657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.692 qpair failed and we were unable to recover it.
00:33:29.692 [2024-07-25 04:16:44.592784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.692 [2024-07-25 04:16:44.592826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.692 qpair failed and we were unable to recover it.
00:33:29.692 [2024-07-25 04:16:44.592963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.692 [2024-07-25 04:16:44.592992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.692 qpair failed and we were unable to recover it.
00:33:29.692 [2024-07-25 04:16:44.593159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.692 [2024-07-25 04:16:44.593185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.692 qpair failed and we were unable to recover it.
00:33:29.692 [2024-07-25 04:16:44.593307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.692 [2024-07-25 04:16:44.593349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.693 qpair failed and we were unable to recover it.
00:33:29.693 [2024-07-25 04:16:44.593503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.693 [2024-07-25 04:16:44.593532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.693 qpair failed and we were unable to recover it.
00:33:29.693 [2024-07-25 04:16:44.593698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.693 [2024-07-25 04:16:44.593724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.693 qpair failed and we were unable to recover it.
00:33:29.693 [2024-07-25 04:16:44.593852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.693 [2024-07-25 04:16:44.593877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.693 qpair failed and we were unable to recover it.
00:33:29.693 [2024-07-25 04:16:44.594001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.693 [2024-07-25 04:16:44.594026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.693 qpair failed and we were unable to recover it.
00:33:29.693 [2024-07-25 04:16:44.594168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.693 [2024-07-25 04:16:44.594196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.693 qpair failed and we were unable to recover it.
00:33:29.693 [2024-07-25 04:16:44.594373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.693 [2024-07-25 04:16:44.594399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.693 qpair failed and we were unable to recover it.
00:33:29.693 [2024-07-25 04:16:44.594545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.694 [2024-07-25 04:16:44.594588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.694 qpair failed and we were unable to recover it.
00:33:29.694 [2024-07-25 04:16:44.594778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.694 [2024-07-25 04:16:44.594804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.694 qpair failed and we were unable to recover it.
00:33:29.694 [2024-07-25 04:16:44.594960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.694 [2024-07-25 04:16:44.594988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.694 qpair failed and we were unable to recover it.
00:33:29.694 [2024-07-25 04:16:44.595146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.694 [2024-07-25 04:16:44.595175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.694 qpair failed and we were unable to recover it.
00:33:29.694 [2024-07-25 04:16:44.595344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.694 [2024-07-25 04:16:44.595370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.694 qpair failed and we were unable to recover it.
00:33:29.694 [2024-07-25 04:16:44.595513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.694 [2024-07-25 04:16:44.595539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.694 qpair failed and we were unable to recover it.
00:33:29.694 [2024-07-25 04:16:44.595676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.694 [2024-07-25 04:16:44.595705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.694 qpair failed and we were unable to recover it.
00:33:29.694 [2024-07-25 04:16:44.595887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.694 [2024-07-25 04:16:44.595913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.694 qpair failed and we were unable to recover it.
00:33:29.694 [2024-07-25 04:16:44.596035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.694 [2024-07-25 04:16:44.596060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.694 qpair failed and we were unable to recover it.
00:33:29.694 [2024-07-25 04:16:44.596167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.694 [2024-07-25 04:16:44.596196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.694 qpair failed and we were unable to recover it.
00:33:29.694 [2024-07-25 04:16:44.596321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.694 [2024-07-25 04:16:44.596347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.694 qpair failed and we were unable to recover it.
00:33:29.694 [2024-07-25 04:16:44.596472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.694 [2024-07-25 04:16:44.596498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.694 qpair failed and we were unable to recover it.
00:33:29.694 [2024-07-25 04:16:44.596670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.596698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.596832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.596858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.597007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.597032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.597233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.597267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.597459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.597485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.597622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.597650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.597824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.597849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.597999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.598025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.598171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.598214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.598376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.598420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.598600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.598628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.598840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.598870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.598999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.599029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.599189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.599216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.599345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.599372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.599566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.599594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.599742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.599767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.599881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.599906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.600044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.600072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.600261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.600304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.600455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.600481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.600693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.600743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.600912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.600937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.601099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.601127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.601291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.601325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.601472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.601497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.601646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.695 [2024-07-25 04:16:44.601689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.695 qpair failed and we were unable to recover it.
00:33:29.695 [2024-07-25 04:16:44.601893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.601948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.602146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.602172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.602315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.602346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.602475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.602503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.602661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.602686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.602833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.602858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.603000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.603028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.603192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.603219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.603375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.603401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.603535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.603563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.603727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.603752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.603908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.603933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.604075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.604100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.604210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.604235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.604390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.604416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.604584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.604612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.604754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.604779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.604925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.604951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.605158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.605184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.605306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.605332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.605450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.605476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.605620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.605645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.605762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.605788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.605956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.605984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.606150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.606178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.606375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.606401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.606519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.606544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.606663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.606688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.606812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.606838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.607008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.607034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.607208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.607236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.607380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.607406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.607518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.607544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.607696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.607722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.607879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.607904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.608053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.608079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.608266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.608295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.608488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.608514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.608698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.608727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.608972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.609023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.609178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.609204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.609387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.609413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.609548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.609576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.609728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.609754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.609882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.609908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.610055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-07-25 04:16:44.610081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
00:33:29.696 [2024-07-25 04:16:44.610267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-07-25 04:16:44.610293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-07-25 04:16:44.610410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-07-25 04:16:44.610436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-07-25 04:16:44.610559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-07-25 04:16:44.610585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-07-25 04:16:44.610703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.610728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.610879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.610922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 
00:33:29.697 [2024-07-25 04:16:44.611082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.611112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.611288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.611315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.611443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.611469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.611678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.611706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.611847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.611872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 
00:33:29.697 [2024-07-25 04:16:44.611999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.612025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.612196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.612239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.612407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.612433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.612621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.612649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.612869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.612922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 
00:33:29.697 [2024-07-25 04:16:44.613112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.613137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.613275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.613304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.613445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.613473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.613639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.613664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.613830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.613862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 
00:33:29.697 [2024-07-25 04:16:44.614034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.614062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.614206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.614231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.614391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.614417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.614566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.614594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.614794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.614819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 
00:33:29.697 [2024-07-25 04:16:44.614973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.615018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.615204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.615232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.615415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.615441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.615590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.615615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.615776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.615803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 
00:33:29.697 [2024-07-25 04:16:44.615947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.615972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.616121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.616163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.616348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.616377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.616546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.616572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.616731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.616760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 
00:33:29.697 [2024-07-25 04:16:44.616897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.616924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.617100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.617125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.617274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.617318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.617460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.617489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.617636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.617661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 
00:33:29.697 [2024-07-25 04:16:44.617806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.617847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.618009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.618039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.618174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.618199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.618335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.618361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.618488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.618513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 
00:33:29.697 [2024-07-25 04:16:44.618699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.618724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.618865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.618890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.619064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-07-25 04:16:44.619092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-07-25 04:16:44.619263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.619289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.619409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.619434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 
00:33:29.698 [2024-07-25 04:16:44.619576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.619605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.619810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.619835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.619950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.619976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.620128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.620153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.620320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.620346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 
00:33:29.698 [2024-07-25 04:16:44.620474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.620500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.620690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.620715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.620836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.620863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.620986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.621012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.621183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.621211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 
00:33:29.698 [2024-07-25 04:16:44.621373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.621399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.621521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.621547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.621767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.621792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.621933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.621959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.622086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.622112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 
00:33:29.698 [2024-07-25 04:16:44.622251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.622277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.622401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.622427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.622601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.622629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.622797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.622825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.622973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.622999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 
00:33:29.698 [2024-07-25 04:16:44.623150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.623176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.623332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.623375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.623540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.623567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-07-25 04:16:44.623727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-07-25 04:16:44.623756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.699 [2024-07-25 04:16:44.623896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-07-25 04:16:44.623924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 
00:33:29.699 [2024-07-25 04:16:44.624098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-07-25 04:16:44.624124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-07-25 04:16:44.624283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-07-25 04:16:44.624313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-07-25 04:16:44.624477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-07-25 04:16:44.624506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-07-25 04:16:44.624678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-07-25 04:16:44.624704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-07-25 04:16:44.624820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-07-25 04:16:44.624845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 
00:33:29.699 [2024-07-25 04:16:44.625024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-07-25 04:16:44.625067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-07-25 04:16:44.625236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-07-25 04:16:44.625269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-07-25 04:16:44.625433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-07-25 04:16:44.625462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-07-25 04:16:44.625652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-07-25 04:16:44.625681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-07-25 04:16:44.625824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-07-25 04:16:44.625850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 
00:33:29.699 [2024-07-25 04:16:44.625971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.625998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.626153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.626181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.626364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.626394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.626520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.626545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.626724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.626754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.626926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.626951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.627070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.627095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.627251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.627281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.627473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.627498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.627619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.627645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.627761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.627787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.627937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.627962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.628070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.628113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.628239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.628305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.628422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.628448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.628616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.628660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.628831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.628860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.629023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.629049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.629171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.629196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.629316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.629342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.629463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.629488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.629627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.629670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.629868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.629898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.630047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.630072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.630199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.630224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.630407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.630436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.630601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.630627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.630796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.630825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.630978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.631007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.631203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.631229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.631408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.631437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.631568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.631596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.631773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.631799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.631974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.632027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.699 [2024-07-25 04:16:44.632211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-07-25 04:16:44.632236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.632369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.632395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.632514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.632540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.632664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.632691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.632872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.632898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.633038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.633068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.633226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.633262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.633412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.633438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.633578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.633604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.633747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.633776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.633955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.633981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.634147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.634176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.634310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.634339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.634482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.634508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.634629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.634655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.634791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.634820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.634969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.634995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.635122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.635164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.635327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.635356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.635502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.635528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.635677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.635702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.635881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.635911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.636101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.636130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.636263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.636308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.636430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.636456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.636577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.636602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.636749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.636774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.636949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.636979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.637112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.637138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.637283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.637310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.637454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.637495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.637639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.637664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.637807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.637848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.637977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.638006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.638160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.638186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.638315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.638342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.638466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.638495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.638658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.638684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.638803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.638845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.639009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.639038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.639193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.639219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.639363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.639390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.639543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.639572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.639740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.639766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.700 qpair failed and we were unable to recover it.
00:33:29.700 [2024-07-25 04:16:44.639964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.700 [2024-07-25 04:16:44.639992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.640125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.640154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.640331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.640357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.640549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.640577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.640749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.640774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.640921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.640948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.641088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.641116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.641274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.641303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.641440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.641466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.641583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.641609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.641789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.641817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.641961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.641987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.642104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.642129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.642266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.642310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.642436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.642462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.642662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.642691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.642845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.701 [2024-07-25 04:16:44.642874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.701 qpair failed and we were unable to recover it.
00:33:29.701 [2024-07-25 04:16:44.643036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.643061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.643176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.643201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.643372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.643398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.643551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.643577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.643727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.643752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 
00:33:29.701 [2024-07-25 04:16:44.643891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.643934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.644083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.644108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.644229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.644262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.644437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.644462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.644614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.644639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 
00:33:29.701 [2024-07-25 04:16:44.644761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.644787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.644917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.644943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.645099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.645124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.645250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.645276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.645432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.645458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 
00:33:29.701 [2024-07-25 04:16:44.645601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.645626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.645747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.645794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.645966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.645994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.646139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.646164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.646315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.646341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 
00:33:29.701 [2024-07-25 04:16:44.646458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.646484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.646601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.646628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.646778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.646803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.701 qpair failed and we were unable to recover it. 00:33:29.701 [2024-07-25 04:16:44.646927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.701 [2024-07-25 04:16:44.646953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.647113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.647140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 
00:33:29.702 [2024-07-25 04:16:44.647289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.647316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.647465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.647491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.647674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.647700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.647823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.647848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.648018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.648044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 
00:33:29.702 [2024-07-25 04:16:44.648251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.648296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.648416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.648442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.648605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.648634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.648774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.648800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.648926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.648951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 
00:33:29.702 [2024-07-25 04:16:44.649133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.649162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.649310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.649337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.649458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.649485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.649605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.649631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.649748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.649774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 
00:33:29.702 [2024-07-25 04:16:44.649927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.649952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.650102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.650131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.650318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.650344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.650462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.650493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.650645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.650674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 
00:33:29.702 [2024-07-25 04:16:44.650811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.650838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.651004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.651032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.651171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.651200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.651360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.651387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.651538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.651564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 
00:33:29.702 [2024-07-25 04:16:44.651703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.651732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.651899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.651925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.652043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.652069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.652210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.652236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 00:33:29.702 [2024-07-25 04:16:44.652372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.702 [2024-07-25 04:16:44.652397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.702 qpair failed and we were unable to recover it. 
00:33:29.702 [2024-07-25 04:16:44.652525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.652551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.652719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.652761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.652903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.652929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.653077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.653103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.653272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.653302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 
00:33:29.703 [2024-07-25 04:16:44.653458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.653483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.653640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.653669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.653845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.653870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.654021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.654046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.654206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.654234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 
00:33:29.703 [2024-07-25 04:16:44.654413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.654439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.654612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.654638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.654770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.654798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.654938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.654967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.655137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.655163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 
00:33:29.703 [2024-07-25 04:16:44.655293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.655320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.655472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.655498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.655616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.655642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.655764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.655790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.655960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.655989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 
00:33:29.703 [2024-07-25 04:16:44.656153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.656179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.656320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.656364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.656535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.656564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.656716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.656742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.656862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.656888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 
00:33:29.703 [2024-07-25 04:16:44.657070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.657099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.657272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.657298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.657482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.657511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.657639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.657667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 00:33:29.703 [2024-07-25 04:16:44.657839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.703 [2024-07-25 04:16:44.657871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.703 qpair failed and we were unable to recover it. 
00:33:29.703 [2024-07-25 04:16:44.658036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.703 [2024-07-25 04:16:44.658065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.703 qpair failed and we were unable to recover it.
[preceding connect()/connection-error/qpair-failure triple repeated 65 more times for tqpair=0x5fc4b0 between 04:16:44.658260 and 04:16:44.670042]
00:33:29.705 [2024-07-25 04:16:44.670173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.705 [2024-07-25 04:16:44.670231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.705 qpair failed and we were unable to recover it.
[triple repeated 44 more times for tqpair=0x7f5410000b90 between 04:16:44.670398 and 04:16:44.678073]
00:33:29.706 [2024-07-25 04:16:44.678215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.706 [2024-07-25 04:16:44.678278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.706 qpair failed and we were unable to recover it.
[triple repeated 3 more times for tqpair=0x5fc4b0 between 04:16:44.678432 and 04:16:44.678805]
00:33:29.706 [2024-07-25 04:16:44.678969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.678998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.679162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.679188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.679316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.679342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.679485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.679510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.679667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.679693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 
00:33:29.706 [2024-07-25 04:16:44.679964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.680013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.680178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.680207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.680365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.680393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.680538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.680581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.680702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.680730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 
00:33:29.706 [2024-07-25 04:16:44.680873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.680899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.681021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.681047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.681188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.681217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.681393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.681419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.681543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.681569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 
00:33:29.706 [2024-07-25 04:16:44.681685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.681711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.681835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.681861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.681975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.682001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.682147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.682172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.682348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.682379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 
00:33:29.706 [2024-07-25 04:16:44.682497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.682523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.682671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.682698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.682857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.682883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.683041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.683098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.683283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.683311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 
00:33:29.706 [2024-07-25 04:16:44.683431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.683457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.683607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.683633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.683805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.683834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.683974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.684000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.684126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.684153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 
00:33:29.706 [2024-07-25 04:16:44.684312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.684338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.684468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.684495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.684618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.684645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.684793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.684824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.684998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.685023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 
00:33:29.706 [2024-07-25 04:16:44.685179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.685204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.685396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.685422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.685548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.685574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.685725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.685750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-07-25 04:16:44.685912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-07-25 04:16:44.685953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 
00:33:29.707 [2024-07-25 04:16:44.686121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.686147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.686319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.686358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.686560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.686590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.686758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.686784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.686962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.687021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 
00:33:29.707 [2024-07-25 04:16:44.687211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.687240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.687399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.687432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.687601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.687630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.687768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.687798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.687934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.687961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 
00:33:29.707 [2024-07-25 04:16:44.688091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.688135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.688287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.688314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.688437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.688463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.688604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.688646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.688849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.688874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 
00:33:29.707 [2024-07-25 04:16:44.689048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.689073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.689223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.689253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.689374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.689400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.689524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.689550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.689752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.689802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 
00:33:29.707 [2024-07-25 04:16:44.689945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.689974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.690112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.690138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.690285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.690311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.690455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.690480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.690641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.690667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 
00:33:29.707 [2024-07-25 04:16:44.690813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.690839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.690990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.691019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.691180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.691205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.691359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.691386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.691513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.691540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 
00:33:29.707 [2024-07-25 04:16:44.691722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.691747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.691920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.691972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.692108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.692137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.692312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.692338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-07-25 04:16:44.692491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-07-25 04:16:44.692516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 
00:33:29.707 [2024-07-25 04:16:44.692672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.707 [2024-07-25 04:16:44.692717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.707 qpair failed and we were unable to recover it.
00:33:29.707 [2024-07-25 04:16:44.693612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.707 [2024-07-25 04:16:44.693654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.707 qpair failed and we were unable to recover it.
00:33:29.708 [2024-07-25 04:16:44.701924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.708 [2024-07-25 04:16:44.701963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.708 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously for tqpair=0x5fc4b0, 0x7f5410000b90, and 0x7f5408000b90 (addr=10.0.0.2, port=4420) through 04:16:44.714 ...]
00:33:29.709 [2024-07-25 04:16:44.714649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.714676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.714805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.714831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.715033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.715062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.715207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.715232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.715395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.715422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-07-25 04:16:44.715598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.715627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.715839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.715893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.716057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.716086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.716268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.716295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.716434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.716460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-07-25 04:16:44.716581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.716623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.716786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.716814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.717000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.717046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.717206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.717235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.717427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.717456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-07-25 04:16:44.717658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.717696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.717943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.717992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.718182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.718226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.718381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.718407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.718583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.718609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-07-25 04:16:44.718773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.718817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.719021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.719065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.719190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.719216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.719378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.719405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.719530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.719556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-07-25 04:16:44.719730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.719771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.719903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.719932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.720157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.720188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.720344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.720371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.720534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.720563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-07-25 04:16:44.720694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.720722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.720884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.720912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.721102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.721138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.721281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.721323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.721474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.721499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-07-25 04:16:44.721650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.721676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.721825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.721851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.721999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.722028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.722190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.722219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.722417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.722443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-07-25 04:16:44.722610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.722639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.722768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.722796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.722979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.723008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.723175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.723201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.723356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.723382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-07-25 04:16:44.723545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.723574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.723709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.723737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.723917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.723967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.724155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.724184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.724329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.724355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-07-25 04:16:44.724480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.724506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.724695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.724736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.724923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.724951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.725088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.725116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.725272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.725314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-07-25 04:16:44.725459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.725485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-07-25 04:16:44.725596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-07-25 04:16:44.725639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.725824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.725853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.725985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.726014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.726170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.726202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-07-25 04:16:44.726359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.726385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.726554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.726582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.726740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.726768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.726923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.726952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.727098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.727141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-07-25 04:16:44.727338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.727364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.727508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.727551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.727725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.727751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.727875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.727919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.728114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.728143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-07-25 04:16:44.728320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.728346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.728517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.728543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.728743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.728772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.728945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.728989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.729188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.729217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-07-25 04:16:44.729350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.729377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.729520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.729562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.729757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.729786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.729971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.730000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.730187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.730215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-07-25 04:16:44.730394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.730421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.730550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.730576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.730692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.730718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.730853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.730882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.731035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.731063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-07-25 04:16:44.731191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.731219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.731281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60a470 (9): Bad file descriptor 00:33:29.710 [2024-07-25 04:16:44.731479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.731518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.731689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.731717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.731916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.731960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.732106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.732150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-07-25 04:16:44.732299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.732326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.732477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.732505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.732626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.732653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.732805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.732831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.732954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.732980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-07-25 04:16:44.733128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.733154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.733333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.733359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.733558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.733586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.733726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.733782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.733934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.733963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-07-25 04:16:44.734124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.734153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.734285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.734311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.734434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.734460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.734615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.734641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.734778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.734807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-07-25 04:16:44.734976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.735005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.735133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.735161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.735329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.735355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.735497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.735522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.735647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.735673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-07-25 04:16:44.735925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.735968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.736147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.736175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.736345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.736371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.736498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.736527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.736648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.736674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-07-25 04:16:44.736816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.736842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.737032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.737073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.737235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.737270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.737427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.737453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.737575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.737600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-07-25 04:16:44.737787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.737816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.737953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.737981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.738159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.738187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.738363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.738390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.738530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.738559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-07-25 04:16:44.738728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.738753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.738873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.738914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-07-25 04:16:44.739081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-07-25 04:16:44.739110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.739254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.739281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.739425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.739451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 
00:33:29.711 [2024-07-25 04:16:44.739611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.739640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.739890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.739940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.740072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.740100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.740269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.740310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.740483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.740509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 
00:33:29.711 [2024-07-25 04:16:44.740645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.740674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.740817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.740845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.741009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.741037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.741172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.741200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.741351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.741377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 
00:33:29.711 [2024-07-25 04:16:44.741520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.741550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.741744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.741772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.741903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.741932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.742098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.742126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.742307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.742334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 
00:33:29.711 [2024-07-25 04:16:44.742477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.742502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.742674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.742703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.742869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.742897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.743087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.743115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.743294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.743321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 
00:33:29.711 [2024-07-25 04:16:44.743496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.743539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.743705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.743734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.743937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.743966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.744125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.744155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.744316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.744343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 
00:33:29.711 [2024-07-25 04:16:44.744491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.744517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.744680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.744709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.744876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.744905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.745115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.745144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.745300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.745326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 
00:33:29.711 [2024-07-25 04:16:44.745473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.745499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.745654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.745680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.745819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.745847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.746037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.746066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.746253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.746295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 
00:33:29.711 [2024-07-25 04:16:44.746440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.746466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.746611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.746639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.746820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.746849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.746981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.747011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.747168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.747196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 
00:33:29.711 [2024-07-25 04:16:44.747363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.747390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.747517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.747543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.747716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.747741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.747984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.748037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-07-25 04:16:44.748229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-07-25 04:16:44.748264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 
00:33:29.713 [... identical connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error messages for tqpair=0x5fc4b0 (addr=10.0.0.2, port=4420) repeat through 04:16:44.768901; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:29.713 [2024-07-25 04:16:44.769041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.769071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.769252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.769278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.769420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.769446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.769592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.769636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.769821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.769849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 
00:33:29.713 [2024-07-25 04:16:44.770022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.770047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.770237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.770271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.770404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.770432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.770600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.770626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.770761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.770789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 
00:33:29.713 [2024-07-25 04:16:44.770945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.770973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.771118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.771143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.771285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.771311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.771492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.771520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.771685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.771710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 
00:33:29.713 [2024-07-25 04:16:44.771837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.771863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.772010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.772036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.772180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.772208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.772376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.772403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.772520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.772545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 
00:33:29.713 [2024-07-25 04:16:44.772688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.772714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.772903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.772932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.773086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.773114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.773278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.773305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.773447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.773473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 
00:33:29.713 [2024-07-25 04:16:44.773611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.773639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.773798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.773824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.773942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.773968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.774143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.774171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.774343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.774370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 
00:33:29.713 [2024-07-25 04:16:44.774483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.774509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.774647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.774675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.774842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.774871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.775006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.775032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.775178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.775204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 
00:33:29.713 [2024-07-25 04:16:44.775359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.775385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.775553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.775581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.775779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.775805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.775956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.775981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.776147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.776175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 
00:33:29.713 [2024-07-25 04:16:44.776336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.776367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.776531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.776557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.776743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.776772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.776902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.776930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.777098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.777124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 
00:33:29.713 [2024-07-25 04:16:44.777319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.777349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.777509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.777538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.777677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.777702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.777896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.777925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 00:33:29.713 [2024-07-25 04:16:44.778089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.713 [2024-07-25 04:16:44.778118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.713 qpair failed and we were unable to recover it. 
00:33:29.713 [2024-07-25 04:16:44.778256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.778283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.778432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.778458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.778606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.778634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.778777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.778803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.778950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.778995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 
00:33:29.714 [2024-07-25 04:16:44.779152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.779181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.779355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.779382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.779561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.779590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.779728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.779757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.779925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.779950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 
00:33:29.714 [2024-07-25 04:16:44.780080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.780125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.780325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.780352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.780476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.780504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.780621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.780647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.780813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.780841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 
00:33:29.714 [2024-07-25 04:16:44.781034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.781060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.781209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.781237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.781383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.781412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.781585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.781611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.781758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.781800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 
00:33:29.714 [2024-07-25 04:16:44.781957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.781986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.782140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.782166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.782284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.782310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.782492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.782518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 00:33:29.714 [2024-07-25 04:16:44.782731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.714 [2024-07-25 04:16:44.782757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.714 qpair failed and we were unable to recover it. 
00:33:29.714 [2024-07-25 04:16:44.782937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.782966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.783125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.783154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.783347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.783374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.783540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.783570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.783733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.783762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.783935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.783961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.784129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.784158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.784305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.784334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.784478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.784504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.784652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.784678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.784843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.784871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.785033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.785059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.785225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.785260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.785396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.785424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.785595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.785620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.785761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.785802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.785930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.785958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.786097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.786123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.786279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.786335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.786507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.786538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.786732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.786759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.786955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.786984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.787170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.787198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.787401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.787429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.787644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.787672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.787858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.787892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.788034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.788060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.788181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.788227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.788383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.788409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.788561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.788587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.788753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.788781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.788926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.788955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.789102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.789128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.789288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.789315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.789440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.789466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.789610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.789636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.789791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.789820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.789978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.790007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.790169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.790195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.790327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.714 [2024-07-25 04:16:44.790353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.714 qpair failed and we were unable to recover it.
00:33:29.714 [2024-07-25 04:16:44.790491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.790517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.790666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.790692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.790839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.790865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.791020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.791046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.791188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.791214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.791371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.791398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.791569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.791598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.791746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.791772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.791887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.791913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.792081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.792110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.792266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.792292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.792435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.792461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.792618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.792652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.792826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.792853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.792996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.793037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.793204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.793233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.793408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.793434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.793602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.793633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.793800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.793830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.793980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.794005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.794127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.794153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.794308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.794334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.794458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.794483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.794641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.794683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.794836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.794865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.795023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.795049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.795216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.795252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.795445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.795470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.795603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.795629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.795773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.795800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.795967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.796008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.796204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.796230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.796376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.796403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.796550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.796576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.796739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.796765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.796888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.796915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.797093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.797122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.797285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.797312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.797431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.797457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.797662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.797690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.797838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.797863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.798003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.798044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.798178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.798206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.798359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.798386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.798561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.798587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.798783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.798812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.798975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.799001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.799119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.799145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.799286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.799313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.799535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.799561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.799720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.799748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.799910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.799938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.800107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.800133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.800280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.800311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.800454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.800483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.800649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.800675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.800868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.800896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.801015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.801043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.801175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.801201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.801382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.801408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.801557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.801600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.801778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.801804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.801946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.801972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.802114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.802143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.802304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.802330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.802457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.802482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.802610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.802635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.802830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.802856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.803017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.803046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.803210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.803238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.803427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.715 [2024-07-25 04:16:44.803453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.715 qpair failed and we were unable to recover it.
00:33:29.715 [2024-07-25 04:16:44.803598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.716 [2024-07-25 04:16:44.803624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.716 qpair failed and we were unable to recover it.
00:33:29.716 [2024-07-25 04:16:44.803827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.716 [2024-07-25 04:16:44.803856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.716 qpair failed and we were unable to recover it.
00:33:29.716 [2024-07-25 04:16:44.804019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.716 [2024-07-25 04:16:44.804045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.716 qpair failed and we were unable to recover it.
00:33:29.716 [2024-07-25 04:16:44.804200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.716 [2024-07-25 04:16:44.804227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.716 qpair failed and we were unable to recover it.
00:33:29.716 [2024-07-25 04:16:44.804375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.804401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.804516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.804554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.804722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.804751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.804892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.804922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.805087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.805113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 
00:33:29.716 [2024-07-25 04:16:44.805231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.805287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.805467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.805496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.805665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.805691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.805858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.805888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.806017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.806046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 
00:33:29.716 [2024-07-25 04:16:44.806186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.806213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.806371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.806397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.806509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.806564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.806709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.806735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.806911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.806956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 
00:33:29.716 [2024-07-25 04:16:44.807092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.807120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.807314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.807341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.807466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.807491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.807664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.807693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.807864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.807890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 
00:33:29.716 [2024-07-25 04:16:44.808041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.808067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.808214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.808246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.808373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.808399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.808551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.808593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.808756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.808785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 
00:33:29.716 [2024-07-25 04:16:44.808949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.808975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.809148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.809173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.809304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.809330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.809451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.809478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.809612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.809654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 
00:33:29.716 [2024-07-25 04:16:44.809800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.809829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.809972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.809998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.810144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.810171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.810311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.810338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.810467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.810492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 
00:33:29.716 [2024-07-25 04:16:44.810621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.810646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.810792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.810818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.810966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.810991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.811145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.811189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.811341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.811367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 
00:33:29.716 [2024-07-25 04:16:44.811506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.811532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.811654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.811680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.811837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.811864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.812019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.812045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.812207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.812236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 
00:33:29.716 [2024-07-25 04:16:44.812380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.812405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.812529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.812558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.812713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.812738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.812889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.812918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.813081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.813107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 
00:33:29.716 [2024-07-25 04:16:44.813282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.813311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.813456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.813482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.813628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.813653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.813801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.813844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.813974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.814003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 
00:33:29.716 [2024-07-25 04:16:44.814153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.814178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.814340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.814368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.814517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.814544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.814665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.814691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 00:33:29.716 [2024-07-25 04:16:44.814849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.814878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.716 qpair failed and we were unable to recover it. 
00:33:29.716 [2024-07-25 04:16:44.815033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.716 [2024-07-25 04:16:44.815062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 00:33:29.717 [2024-07-25 04:16:44.815255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.717 [2024-07-25 04:16:44.815284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 00:33:29.717 [2024-07-25 04:16:44.815418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.717 [2024-07-25 04:16:44.815443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 00:33:29.717 [2024-07-25 04:16:44.815590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.717 [2024-07-25 04:16:44.815618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 00:33:29.717 [2024-07-25 04:16:44.815750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.717 [2024-07-25 04:16:44.815776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 
00:33:29.717 [2024-07-25 04:16:44.815972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.717 [2024-07-25 04:16:44.816001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 00:33:29.717 [2024-07-25 04:16:44.816161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.717 [2024-07-25 04:16:44.816189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 00:33:29.717 [2024-07-25 04:16:44.816356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.717 [2024-07-25 04:16:44.816384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 00:33:29.717 [2024-07-25 04:16:44.816498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.717 [2024-07-25 04:16:44.816524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 00:33:29.717 [2024-07-25 04:16:44.816723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.717 [2024-07-25 04:16:44.816752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 
00:33:29.717 [2024-07-25 04:16:44.816949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.717 [2024-07-25 04:16:44.816975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 00:33:29.717 [2024-07-25 04:16:44.817159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.717 [2024-07-25 04:16:44.817187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 00:33:29.717 [2024-07-25 04:16:44.817351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.717 [2024-07-25 04:16:44.817378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 00:33:29.717 [2024-07-25 04:16:44.817530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.717 [2024-07-25 04:16:44.817560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 00:33:29.717 [2024-07-25 04:16:44.817673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.717 [2024-07-25 04:16:44.817699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.717 qpair failed and we were unable to recover it. 
00:33:29.717 [2024-07-25 04:16:44.817879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.717 [2024-07-25 04:16:44.817908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.717 qpair failed and we were unable to recover it.
00:33:29.718 (the connect()/nvme_tcp_qpair_connect_sock error pair above repeated roughly 110 further times for tqpair=0x5fc4b0, addr=10.0.0.2, port=4420, errno = 111, with timestamps from 04:16:44.818 through 04:16:44.839; every attempt ended with "qpair failed and we were unable to recover it.")
00:33:29.718 [2024-07-25 04:16:44.839453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-07-25 04:16:44.839479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-07-25 04:16:44.839623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-07-25 04:16:44.839653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-07-25 04:16:44.839815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-07-25 04:16:44.839844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-07-25 04:16:44.840025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-07-25 04:16:44.840051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-07-25 04:16:44.840236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-07-25 04:16:44.840272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 
00:33:29.718 [2024-07-25 04:16:44.840419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-07-25 04:16:44.840445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-07-25 04:16:44.840623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-07-25 04:16:44.840675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-07-25 04:16:44.840846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-07-25 04:16:44.840887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-07-25 04:16:44.841099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-07-25 04:16:44.841135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-07-25 04:16:44.841350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-07-25 04:16:44.841377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 
00:33:29.718 [2024-07-25 04:16:44.841549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-07-25 04:16:44.841575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-07-25 04:16:44.841695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-07-25 04:16:44.841721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-07-25 04:16:44.841931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-07-25 04:16:44.841975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.842114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.842144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.842288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.842314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-07-25 04:16:44.842462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.842489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.842646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.842675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.842831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.842860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.843018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.843046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.843203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.843232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-07-25 04:16:44.843412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.843438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.843571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.843601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.843732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.843761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.843922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.843951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.844147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.844176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-07-25 04:16:44.844322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.844348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.844501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.844528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.844675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.844701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.844842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.844868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.845020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.845056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-07-25 04:16:44.845229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.845263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.845412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.845438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.845590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.845616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.845794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.845824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.845969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.846012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-07-25 04:16:44.846152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.846181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.846351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.846378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.846508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.846535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.846660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.846688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.846839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.846883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-07-25 04:16:44.847066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.847096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.847259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.847303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.847432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.847460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.847617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.847643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.847787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.847813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-07-25 04:16:44.847968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.847994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.848118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.848144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.848282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.848309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.848434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.848460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.848646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.848673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-07-25 04:16:44.848815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.848842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.848987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.849013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.849161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.849188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.849352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.849379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.849521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.849547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-07-25 04:16:44.849668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.849694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.849845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.849871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.850022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.850047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.850196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.850221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.850359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.850385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-07-25 04:16:44.850548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.850581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.850724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.850751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.850909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.850936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.851107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.851132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.851256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.851283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-07-25 04:16:44.851403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.851429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.851547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.851573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.851693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.851719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.851844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.851870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.852018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.852043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-07-25 04:16:44.852191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.852217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.852357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.852383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.852510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.852537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.852696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.852722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-07-25 04:16:44.852857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-07-25 04:16:44.852894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-07-25 04:16:44.853040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.719 [2024-07-25 04:16:44.853067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.719 qpair failed and we were unable to recover it.
00:33:29.719 [2024-07-25 04:16:44.853222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.719 [2024-07-25 04:16:44.853265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.719 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.853384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.853411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.853556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.853582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.853732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.853758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.853885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.853926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.854142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.854168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.854326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.854353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.854500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.854526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.854742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.854784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.855013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.855061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.855257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.855301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.855421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.855447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.855575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.855602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.855745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.855779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.856592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.856626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.856797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.856825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.856969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.857012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.857190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.857215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.857371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.857398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.857553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.857578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.857750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.857779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.857921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.857949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.858099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.858126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.858254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.858281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.858405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.858431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.858560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.858590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.859474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.859505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.859704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.859731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.860410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.860440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.860617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.860643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.860771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.860797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.860923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.860949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.861141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.861168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.861312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.861339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.861465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.861492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.861619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.861648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.861808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.861835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.861968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.861994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.862119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.862145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.862299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.862340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.862498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.862525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.862654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.862691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.862844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.862870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.863019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.863046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.863169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.863195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.863335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.863362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.863500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.863539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.863683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.863712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.863833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.863860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.863983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.864009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.864158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.864184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.864326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.864353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.864465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.864496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.864659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.864687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.864827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.864856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.865024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.865062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.865198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.865224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.865380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.865407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.865525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.865561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.865728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.865764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.865893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.865919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.866038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.866065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.866194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.866220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.866356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.866383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.866525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.866559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.866710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.866739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.866920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.866946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.867104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.867130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.867278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.867305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.867436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.867463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.720 [2024-07-25 04:16:44.867576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.720 [2024-07-25 04:16:44.867602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.720 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.867735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.867765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.867910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.867936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.868062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.868088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.868203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.868229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.868370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.868397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.868521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.868552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.868672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.868699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.868815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.868841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.868969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.868996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.869141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.869167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.869315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.869354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.869479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.869507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.869641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.869669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.869823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.869854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.870010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.870049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.870176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.870203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.870334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.870362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.870558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.870592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.870740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.870782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.870927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.870953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.871075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.871101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.871259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.871290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.871410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.871436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.871564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.871592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.871748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.871775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.871902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.871929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.872055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.872082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.872209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.872235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.872379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.872406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.872525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.872552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.872670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.872696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.872835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.872861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.872987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.873013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.873133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.873159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.873297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.873324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.873453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.873480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.873630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.873657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.873807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.873834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.873958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.873984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.874102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.874128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.874302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.874341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.874469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.874496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.874617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.721 [2024-07-25 04:16:44.874643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.721 qpair failed and we were unable to recover it.
00:33:29.721 [2024-07-25 04:16:44.874798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.874825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.874950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.874976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.875090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.875117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.875233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.875279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.875413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.875451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 
00:33:29.721 [2024-07-25 04:16:44.875602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.875641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.875788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.875817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.875967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.875994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.876127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.876154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.876304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.876333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 
00:33:29.721 [2024-07-25 04:16:44.876462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.876490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.876634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.876663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.876825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.876854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.876994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.877024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.877168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.877193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 
00:33:29.721 [2024-07-25 04:16:44.877324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.877350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.877463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.877489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.877639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.877680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.877828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.877854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.878004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.878048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 
00:33:29.721 [2024-07-25 04:16:44.878177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-07-25 04:16:44.878207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-07-25 04:16:44.878376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.878407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.878529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.878567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.878747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.878787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.878948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.878979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 
00:33:29.722 [2024-07-25 04:16:44.879150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.879194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.879344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.879371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.879500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.879527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.879712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.879754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.879927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.879975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 
00:33:29.722 [2024-07-25 04:16:44.880133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.880162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.880321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.880347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.880487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.880525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.880650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.880678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.880934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.880965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 
00:33:29.722 [2024-07-25 04:16:44.881132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.881187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.881366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.881406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.881539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.881578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.881707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.881735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.881901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.881946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 
00:33:29.722 [2024-07-25 04:16:44.882130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.882164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.882298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.882325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.882440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.882467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.882621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.882648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.882769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.882796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 
00:33:29.722 [2024-07-25 04:16:44.882919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.882950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.883069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.883095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.883277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.883306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.883419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.883445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.883556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.883582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 
00:33:29.722 [2024-07-25 04:16:44.883732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.883760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.883959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.884007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.884140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.884168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.884315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.884341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.884463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.884489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 
00:33:29.722 [2024-07-25 04:16:44.884616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.884642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.884817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.884853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.885058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.885086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.885220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.885256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.885402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.885427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 
00:33:29.722 [2024-07-25 04:16:44.885547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.885573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.885746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.885774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.885994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.886020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.886164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.886193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.886345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.886371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 
00:33:29.722 [2024-07-25 04:16:44.886493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.886518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.886630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.886656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.886788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.886814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.886948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.886977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.887132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.887161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 
00:33:29.722 [2024-07-25 04:16:44.887321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.887347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.887463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.887489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.887650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.887676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.887823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.887851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 00:33:29.722 [2024-07-25 04:16:44.888039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.722 [2024-07-25 04:16:44.888069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.722 qpair failed and we were unable to recover it. 
00:33:29.722 [2024-07-25 04:16:44.888203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.888232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.888387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.888413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.888533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.888558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.888707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.888733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.888871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.888897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.889028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.889054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.889226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.889271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.889397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.889424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.889547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.889573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.889717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.889743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.889889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.889918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.890059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.890104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.890290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.890317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.890436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.890461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.890614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.722 [2024-07-25 04:16:44.890641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.722 qpair failed and we were unable to recover it.
00:33:29.722 [2024-07-25 04:16:44.890762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.890787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.890965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.891015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.891157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.891183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.891329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.891368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.891497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.891523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.891681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.891707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.891854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.891880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.892017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.892044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.892184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.892210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.892339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.892366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.892495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.892538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.892706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.892732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.892877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.892904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.893025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.893051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.893202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.893228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.893365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.893392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.893509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.893536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.893697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.893723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.893869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.893895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.894016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.894042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.894158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.894185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.894331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.894370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.894499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.894526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.894661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.894687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.894811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.894837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.894960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.894986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.895104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.895130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.895252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.895278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.895393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.895419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.895534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.895559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.895677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.895703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.895842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.895868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.896020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.896046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.896204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.896229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.896372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.896398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.896513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.896539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.896665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.896690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.896846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.896872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.896987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.897012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.897160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.897185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.897322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.897349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.897471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.897497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.897655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.897683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.897807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.897832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.897949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.897977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.898108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.898133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.898262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.898288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.898409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.898435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.898554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.898579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.898698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.898723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.898842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.898875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.899006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.899032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.899165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.899192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.899326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.899353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.899476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.899502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.899623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.899649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.899789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.899816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.899963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.899988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.900115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.900141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.900271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.900297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.900413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.900439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.900591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.900617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.900766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.900792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.900938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.900963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.901085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.901111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.901232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.901266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.901381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.901407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.901527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.901554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.901701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.901727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.723 qpair failed and we were unable to recover it.
00:33:29.723 [2024-07-25 04:16:44.901867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.723 [2024-07-25 04:16:44.901893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.902016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.902042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.902196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.902222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.902366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.902406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.902534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.902561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.902731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.902758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.902898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.902924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.903097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.903124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.903294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.903327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.903461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.903488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.903639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.903667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.903816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.903842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.903973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.903999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.904147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.904173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.904313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.904341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.904477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.904504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.904652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.904678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.904825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.904870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.905022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.905048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.905177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.905204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.905356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.905391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.905557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.905605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.905814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.905840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.906000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.906026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.906168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.906193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.906321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.906348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.906462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.906488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.906999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.907029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.907186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.907214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.907361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.907389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.907516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.724 [2024-07-25 04:16:44.907542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:29.724 qpair failed and we were unable to recover it.
00:33:29.724 [2024-07-25 04:16:44.907675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.907700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.907838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.907864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.908016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.908043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.908192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.908217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.908352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.908382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 
00:33:29.724 [2024-07-25 04:16:44.908516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.908541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.908663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.908690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.908808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.908834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.908987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.909013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.909167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.909193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 
00:33:29.724 [2024-07-25 04:16:44.909344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.909383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.909518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.909545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.909713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.909758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.909921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.909977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.910159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.910186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 
00:33:29.724 [2024-07-25 04:16:44.910323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.910350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.910475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.910502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.910619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.910646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.910796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.910823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.910992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.911019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 
00:33:29.724 [2024-07-25 04:16:44.911137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.724 [2024-07-25 04:16:44.911164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.724 qpair failed and we were unable to recover it. 00:33:29.724 [2024-07-25 04:16:44.911311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.911338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.911478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.911504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.911684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.911710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.911833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.911859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 
00:33:29.725 [2024-07-25 04:16:44.912012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.912038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.912162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.912188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.912350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.912388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.912521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.912550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.912695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.912720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 
00:33:29.725 [2024-07-25 04:16:44.912869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.912896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.913016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.913043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.913164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.913190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.913322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.913349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.913465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.913491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 
00:33:29.725 [2024-07-25 04:16:44.913616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.913643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.913796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.913823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.913980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.914025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.914174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.914200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.914328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.914355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 
00:33:29.725 [2024-07-25 04:16:44.914479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.914505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.914660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.914687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.914873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.914902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.915139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.915168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.915337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.915364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 
00:33:29.725 [2024-07-25 04:16:44.915486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.915512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.915660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.915688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.915811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.915838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.915967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.916010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.916142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.916171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 
00:33:29.725 [2024-07-25 04:16:44.916328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.916355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.916472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.916500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.916652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.916679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.916826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.916853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.916967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.916993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 
00:33:29.725 [2024-07-25 04:16:44.917142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.917178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.917313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.917340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.917457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.917484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.917604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.917633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.917776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.917821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 
00:33:29.725 [2024-07-25 04:16:44.917993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.918023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.918196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.918223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.918348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.918375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.918497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.918523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.918659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.918703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 
00:33:29.725 [2024-07-25 04:16:44.918866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.918895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.919057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.919086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.919236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.919269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.919390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.919417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.919535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.919562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 
00:33:29.725 [2024-07-25 04:16:44.919682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.919708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.919880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.919934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.920101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.920130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.920271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.920315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.920432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.920459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 
00:33:29.725 [2024-07-25 04:16:44.920580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.920607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.920731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.920757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.920966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.920993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.921119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.921145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.921302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.921329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 
00:33:29.725 [2024-07-25 04:16:44.921451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.921477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.921600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.921626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.921742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.921769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.921914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.921940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 00:33:29.725 [2024-07-25 04:16:44.922079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.725 [2024-07-25 04:16:44.922118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:29.725 qpair failed and we were unable to recover it. 
00:33:29.725 [2024-07-25 04:16:44.922306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.725 [2024-07-25 04:16:44.922371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.725 qpair failed and we were unable to recover it.
00:33:29.725 [2024-07-25 04:16:44.922519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.725 [2024-07-25 04:16:44.922599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.725 qpair failed and we were unable to recover it.
00:33:29.725 [2024-07-25 04:16:44.922783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.725 [2024-07-25 04:16:44.922815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:29.725 qpair failed and we were unable to recover it.
00:33:29.725 [2024-07-25 04:16:44.922976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.923016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.923152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.923183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.923352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.923380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.923498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.923526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.923664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.923703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.923822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.923850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.924030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.924076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.924247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.924275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.925443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.925476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.925671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.925700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.926589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.926620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.926856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.926885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.927067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.927095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.927224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.927266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.927390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.927417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.927556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.927585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.927696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.927724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.927856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.927884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.928037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.928065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.928219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.928264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.928394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.928421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.928550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.928579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.928746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.928790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.928961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.929002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.929188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.929221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.929352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.929379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.929506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.929545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.929700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.929726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.929848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.929875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.930020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.930068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.930292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.930319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.930447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.930473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.007 qpair failed and we were unable to recover it.
00:33:30.007 [2024-07-25 04:16:44.930639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.007 [2024-07-25 04:16:44.930671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.930880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.930929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.931137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.931166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.931346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.931375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.931511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.931555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.931749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.931775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.931970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.932019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.932192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.932221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.932392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.932418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.932541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.932568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.932719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.932746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.932940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.932987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.933146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.933175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.933334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.933362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.933492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.933519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.933647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.933675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.933840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.933866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.934006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.934033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.934213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.934248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.934399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.934425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.934547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.934576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.934700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.934745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.934922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.934951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.935167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.935193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.935322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.935349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.935472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.935498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.935621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.935647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.935767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.935800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.936017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.936046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.936187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.936214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.936353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.936380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.936497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.936524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.936673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.936703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.936849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.936878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.937020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.937062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.937203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.937229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.937360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.937386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-07-25 04:16:44.937512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-07-25 04:16:44.937538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.937708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.937734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.937860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.937886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.938046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.938073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.938204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.938231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.938357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.938383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.938507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.938533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.938710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.938736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.938872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.938898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.939085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.939127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.939301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.939328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.939444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.939471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.939652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.939695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.939921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.939968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.940131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.940161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.940337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.940363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.940485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.940512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.940632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.940657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.940776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.940802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.940935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.940964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.941089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.941117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.941260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.941305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.941449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.009 [2024-07-25 04:16:44.941489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.009 qpair failed and we were unable to recover it.
00:33:30.009 [2024-07-25 04:16:44.941633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.941677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-07-25 04:16:44.941816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.941860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-07-25 04:16:44.942038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.942082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-07-25 04:16:44.942226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.942257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-07-25 04:16:44.942379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.942405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 
00:33:30.009 [2024-07-25 04:16:44.942525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.942552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-07-25 04:16:44.942732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.942758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-07-25 04:16:44.942917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.942956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-07-25 04:16:44.943077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.943105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-07-25 04:16:44.943227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.943258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 
00:33:30.009 [2024-07-25 04:16:44.943383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.943410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-07-25 04:16:44.943534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.943561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-07-25 04:16:44.943737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.943772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-07-25 04:16:44.944028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.944057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-07-25 04:16:44.944185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.944215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 
00:33:30.009 [2024-07-25 04:16:44.944368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-07-25 04:16:44.944394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.944524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.944550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.944700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.944726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.944904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.944933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.945120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.945149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 
00:33:30.010 [2024-07-25 04:16:44.945309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.945336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.945460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.945487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.945647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.945673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.945838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.945867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.945997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.946026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 
00:33:30.010 [2024-07-25 04:16:44.946164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.946190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.946341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.946380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.946518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.946566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.946747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.946774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.946921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.946947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 
00:33:30.010 [2024-07-25 04:16:44.947128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.947157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.947320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.947346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.947470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.947496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.947693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.947722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.947879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.947912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 
00:33:30.010 [2024-07-25 04:16:44.948047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.948076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.948218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.948249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.948382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.948408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.948559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.948586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.948705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.948735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 
00:33:30.010 [2024-07-25 04:16:44.948887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.948929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.949123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.949151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.949306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.949332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.949456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.949482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.949660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.949703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 
00:33:30.010 [2024-07-25 04:16:44.949862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.949890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.950065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.950108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.950269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.950308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.950462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.950501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.950654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.950682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 
00:33:30.010 [2024-07-25 04:16:44.950809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.950837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.950983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.951026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.951264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-07-25 04:16:44.951292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-07-25 04:16:44.951420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.951447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.951570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.951597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 
00:33:30.011 [2024-07-25 04:16:44.951766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.951796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.951975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.952003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.952181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.952211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.952374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.952400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.952519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.952545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 
00:33:30.011 [2024-07-25 04:16:44.952692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.952733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.952915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.952963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.953110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.953138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.953298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.953324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.953445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.953471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 
00:33:30.011 [2024-07-25 04:16:44.953643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.953671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.953873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.953921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.954146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.954175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.954336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.954363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.954487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.954513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 
00:33:30.011 [2024-07-25 04:16:44.954665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.954691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.954841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.954882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.955040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.955069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.955233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.955265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.955390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.955416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 
00:33:30.011 [2024-07-25 04:16:44.955564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.955593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.955731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.955757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.955907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.955949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.956115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.956159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.956305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.956335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 
00:33:30.011 [2024-07-25 04:16:44.956452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.956479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.956641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.956667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.956838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.956885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.957076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-07-25 04:16:44.957102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-07-25 04:16:44.957282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.957325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 
00:33:30.012 [2024-07-25 04:16:44.957450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.957476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.957610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.957652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.957790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.957818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.958004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.958032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.958161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.958191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 
00:33:30.012 [2024-07-25 04:16:44.958348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.958375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.958492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.958519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.958644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.958670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.958787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.958813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.958956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.958998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 
00:33:30.012 [2024-07-25 04:16:44.959159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.959187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.959346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.959373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.959497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.959523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.959670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.959696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.959830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.959858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 
00:33:30.012 [2024-07-25 04:16:44.960029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.960071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.960212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.960238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.960378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.960404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.960530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.960557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.960682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.960708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 
00:33:30.012 [2024-07-25 04:16:44.960823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.960850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.961049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.961079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.961270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.961326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.961474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.961512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.961688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.961716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 
00:33:30.012 [2024-07-25 04:16:44.961865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.961892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.962130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.962175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.962320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.962346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.962468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.962494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.962621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.962646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 
00:33:30.012 [2024-07-25 04:16:44.962777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.962802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.962952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.962979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.963100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.963141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.963307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.963333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.963453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.963484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 
00:33:30.012 [2024-07-25 04:16:44.963642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.963671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-07-25 04:16:44.963873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-07-25 04:16:44.963899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.964027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.964052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.964226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.964257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.964376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.964402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 
00:33:30.013 [2024-07-25 04:16:44.964572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.964600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.964783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.964808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.964985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.965010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.965177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.965205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.965353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.965379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 
00:33:30.013 [2024-07-25 04:16:44.965502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.965527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.965654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.965681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.965816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.965841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.966028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.966053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.966179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.966206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 
00:33:30.013 [2024-07-25 04:16:44.966338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.966364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.966487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.966513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.966632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.966658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.966825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.966854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.966995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.967020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 
00:33:30.013 [2024-07-25 04:16:44.967129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.967155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.967340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.967366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.967487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.967512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.967656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.967699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.967884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.967913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 
00:33:30.013 [2024-07-25 04:16:44.968044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.968070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.968215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.968240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.968417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.968443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.968554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.968579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.968729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.968755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 
00:33:30.013 [2024-07-25 04:16:44.968923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.968952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.969100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.969125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.969279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.969305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.969422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.969449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.969597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.969623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 
00:33:30.013 [2024-07-25 04:16:44.969766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.969791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.969944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.969969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.970097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.970126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.970324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.970350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-07-25 04:16:44.970496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-07-25 04:16:44.970522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 
00:33:30.013 [2024-07-25 04:16:44.970640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.970670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.970793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.970836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.971000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.971026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.971144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.971171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.971325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.971352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 
00:33:30.014 [2024-07-25 04:16:44.971484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.971510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.971667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.971693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.971886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.971914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.972035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.972063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.972205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.972231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 
00:33:30.014 [2024-07-25 04:16:44.972411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.972437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.972551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.972576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.972725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.972750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.972904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.972930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.973062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.973088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 
00:33:30.014 [2024-07-25 04:16:44.973239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.973270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.973383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.973408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.973586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.973625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.973821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.973850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.973995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.974022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 
00:33:30.014 [2024-07-25 04:16:44.974170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.974198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.974355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.974383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.974503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.974547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.974710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.974740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.974911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.974938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 
00:33:30.014 [2024-07-25 04:16:44.975117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.975160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.975331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.975358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.975480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.975510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.975683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.975739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-07-25 04:16:44.975892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-07-25 04:16:44.975942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 
00:33:30.014 [2024-07-25 04:16:44.976111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.014 [2024-07-25 04:16:44.976137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.014 qpair failed and we were unable to recover it.
00:33:30.014 [2024-07-25 04:16:44.976309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.014 [2024-07-25 04:16:44.976349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.014 qpair failed and we were unable to recover it.
00:33:30.014 [2024-07-25 04:16:44.976478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.014 [2024-07-25 04:16:44.976508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.014 qpair failed and we were unable to recover it.
00:33:30.014 [2024-07-25 04:16:44.976724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.014 [2024-07-25 04:16:44.976751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.014 qpair failed and we were unable to recover it.
00:33:30.014 [2024-07-25 04:16:44.976902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.014 [2024-07-25 04:16:44.976929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.014 qpair failed and we were unable to recover it.
00:33:30.014 [2024-07-25 04:16:44.977076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.014 [2024-07-25 04:16:44.977103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.014 qpair failed and we were unable to recover it.
00:33:30.014 [2024-07-25 04:16:44.977261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.977289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.977408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.977435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.977576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.977606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.977780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.977807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.977980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.978009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.978171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.978214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.978377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.978404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.978580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.978624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.978829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.978875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.979019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.979046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.979161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.979187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.979354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.979381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.979539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.979565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.979684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.979710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.979841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.979867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.980012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.980038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.980185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.980211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.980424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.980450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.980601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.980627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.980775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.980802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.981003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.981029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.981197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.981226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.981386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.981424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.981548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.981575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.981724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.981750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.981893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.981919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.982039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.982065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.982211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.982236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.982394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.982420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.982592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.982621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.982782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.982808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.982932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.982960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.983084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.983110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.983236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.015 [2024-07-25 04:16:44.983270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.015 qpair failed and we were unable to recover it.
00:33:30.015 [2024-07-25 04:16:44.983400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.983427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.983625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.983654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.983813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.983839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.984000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.984039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.984184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.984214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.984420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.984446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.984573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.984599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.984754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.984780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.984931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.984957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.985083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.985111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.985317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.985344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.985501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.985527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.985639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.985682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.985848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.985878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.986047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.986073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.986227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.986258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.986381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.986407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.986532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.986559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.986683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.986724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.986894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.986920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.987083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.987111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.987273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.987315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.987463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.987489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.987695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.987721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.987892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.987937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.988105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.988135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.988305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.988331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.988452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.988494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.988623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.988652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.988822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.988847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.989011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.016 [2024-07-25 04:16:44.989039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.016 qpair failed and we were unable to recover it.
00:33:30.016 [2024-07-25 04:16:44.989170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.989199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.989362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.989388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.989507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.989550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.989710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.989739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.989931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.989956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.990122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.990151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.990270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.990299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.990437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.990470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.990626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.990652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.990803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.990829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.990977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.991003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.991131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.991157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.991280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.991306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.991425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.991451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.991571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.991598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.991742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.991767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.991920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.991946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.992079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.992108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.992270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.992313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.992488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.992513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.992621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.992646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.992785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.992814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.992983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.993009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.993174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.993203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.993375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.993401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.993543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.993568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.993734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.993762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.993900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.993928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.994088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.994117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.994250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.994293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.994444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.994470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.994614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.994639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.994788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.994814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.994960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.994986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.995161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.995187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.995315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.995341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.995511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.995536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.995680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.995706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.017 qpair failed and we were unable to recover it.
00:33:30.017 [2024-07-25 04:16:44.995843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.017 [2024-07-25 04:16:44.995872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.996055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.996083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.996254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.996280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.996449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.996475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.996663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.996691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.996868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.996893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.997038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.997064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.997187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.997213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.997394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.997420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.997606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.997634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.997852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.997906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.998108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.998133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.998334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.998360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.998477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.998504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.998650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.998675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.998912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.998959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.999146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.999174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.999329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.999355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.999498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.999523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.999708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.999736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:44.999898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:44.999924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.000044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.000070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.000201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.000226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.000386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.000412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.000584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.000613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.000773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.000802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.000938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.000964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.001155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.001184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.001375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.001404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.001575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.001601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.001771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.001799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.001963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.001992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.002168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.002193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.002304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.002330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.002478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.002504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.002648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.002673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.002822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.002848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.002993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.003026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.018 [2024-07-25 04:16:45.003206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.018 [2024-07-25 04:16:45.003232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.018 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.003368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.003393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.003583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.003612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.003755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.003781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.003907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.003932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.004049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.004074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.004267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.004293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.004417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.004458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.004592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.004621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.004783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.004808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.004998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.005027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.005191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.005217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.005345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.005370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.005522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.005548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.005698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.005740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.005888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.005913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.006060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.006104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.006272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.006314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.006489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.006515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.006678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.006707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.006899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.006925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.007082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.007108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.007254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.007284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.007447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.007476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.007642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.007667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.007815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.007840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.008033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.008061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.008201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.008228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.008387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.008430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.008634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.008660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.008803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.008828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.008970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.008995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.009158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.009187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.009351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.009377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.009528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.009554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.009732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.009759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.009907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.009933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.010084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.010111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.019 [2024-07-25 04:16:45.010268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.019 [2024-07-25 04:16:45.010297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.019 qpair failed and we were unable to recover it.
00:33:30.020 [2024-07-25 04:16:45.010455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.020 [2024-07-25 04:16:45.010480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.020 qpair failed and we were unable to recover it.
00:33:30.020 [2024-07-25 04:16:45.010642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.020 [2024-07-25 04:16:45.010675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.020 qpair failed and we were unable to recover it.
00:33:30.020 [2024-07-25 04:16:45.010809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.020 [2024-07-25 04:16:45.010837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.020 qpair failed and we were unable to recover it.
00:33:30.020 [2024-07-25 04:16:45.011020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.020 [2024-07-25 04:16:45.011045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.020 qpair failed and we were unable to recover it.
00:33:30.020 [2024-07-25 04:16:45.011204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.011233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.011396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.011424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.011568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.011593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.011719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.011745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.011886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.011912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 
00:33:30.020 [2024-07-25 04:16:45.012060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.012086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.012224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.012258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.012388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.012418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.012558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.012583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.012705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.012730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 
00:33:30.020 [2024-07-25 04:16:45.012851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.012877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.013025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.013051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.013212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.013246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.013416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.013442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.013583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.013608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 
00:33:30.020 [2024-07-25 04:16:45.013802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.013831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.013986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.014014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.014151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.014177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.014327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.014371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.014536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.014565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 
00:33:30.020 [2024-07-25 04:16:45.014757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.014783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.014950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.014979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.015168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.015197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.015355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.015381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.015497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.015528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 
00:33:30.020 [2024-07-25 04:16:45.015700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.015743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.015889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.015915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.016059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.016101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.016233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.016273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.016422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.016448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 
00:33:30.020 [2024-07-25 04:16:45.016590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.016634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.016791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.016819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.016982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-07-25 04:16:45.017007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-07-25 04:16:45.017200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.017228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.017398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.017427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 
00:33:30.021 [2024-07-25 04:16:45.017596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.017622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.017788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.017816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.017984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.018009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.018195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.018220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.018356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.018382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 
00:33:30.021 [2024-07-25 04:16:45.018572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.018601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.018769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.018794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.018967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.018997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.019155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.019183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.019319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.019345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 
00:33:30.021 [2024-07-25 04:16:45.019466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.019508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.019673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.019699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.019874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.019899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.020070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.020098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.020256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.020301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 
00:33:30.021 [2024-07-25 04:16:45.020453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.020479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.020639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.020668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.020839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.020867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.021026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.021052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.021176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.021202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 
00:33:30.021 [2024-07-25 04:16:45.021357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.021385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.021577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.021603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.021737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.021766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.021918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.021946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.022137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.022162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 
00:33:30.021 [2024-07-25 04:16:45.022325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.022355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.022515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.022543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.022690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.022715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.022859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.022885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-07-25 04:16:45.023040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-07-25 04:16:45.023067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 
00:33:30.021 [2024-07-25 04:16:45.023260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.023290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.023412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.023437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.023592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.023620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.023785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.023811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.024000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.024028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 
00:33:30.022 [2024-07-25 04:16:45.024164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.024193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.024364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.024391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.024517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.024560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.024727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.024756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.024928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.024954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 
00:33:30.022 [2024-07-25 04:16:45.025105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.025132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.025316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.025346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.025487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.025512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.025704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.025733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.025902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.025930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 
00:33:30.022 [2024-07-25 04:16:45.026098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.026124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.026250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.026292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.026419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.026448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.026579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.026604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.026796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.026824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 
00:33:30.022 [2024-07-25 04:16:45.026968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.026996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.027173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.027202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.027379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.027405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.027579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.027608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-07-25 04:16:45.027745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-07-25 04:16:45.027771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 
00:33:30.025 [2024-07-25 04:16:45.048446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-07-25 04:16:45.048471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-07-25 04:16:45.048592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-07-25 04:16:45.048622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-07-25 04:16:45.048795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-07-25 04:16:45.048824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-07-25 04:16:45.048994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-07-25 04:16:45.049020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-07-25 04:16:45.049134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-07-25 04:16:45.049161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 
00:33:30.025 [2024-07-25 04:16:45.049315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-07-25 04:16:45.049342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-07-25 04:16:45.049526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-07-25 04:16:45.049551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-07-25 04:16:45.049716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-07-25 04:16:45.049744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-07-25 04:16:45.049869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-07-25 04:16:45.049898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-07-25 04:16:45.050076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-07-25 04:16:45.050105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 
00:33:30.025 [2024-07-25 04:16:45.050275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-07-25 04:16:45.050318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-07-25 04:16:45.050466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-07-25 04:16:45.050491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-07-25 04:16:45.050687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-07-25 04:16:45.050712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.050824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.050850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.051020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.051048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 
00:33:30.026 [2024-07-25 04:16:45.051231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.051262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.051404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.051431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.051584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.051609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.051733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.051762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.051929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.051958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 
00:33:30.026 [2024-07-25 04:16:45.052122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.052151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.052326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.052352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.052542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.052570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.052758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.052787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.052951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.052978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 
00:33:30.026 [2024-07-25 04:16:45.053146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.053174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.053376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.053405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.053549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.053574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.053744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.053786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.053951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.053979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 
00:33:30.026 [2024-07-25 04:16:45.054145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.054170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.054347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.054376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.054534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.054562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.054725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.054751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.054945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.054973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 
00:33:30.026 [2024-07-25 04:16:45.055138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.055166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.055344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.055370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.055523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.055548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.055693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.055719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.055859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.055884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 
00:33:30.026 [2024-07-25 04:16:45.056072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.056101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.056280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.056307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.056458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.056483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.056642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.056685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.056839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.056868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 
00:33:30.026 [2024-07-25 04:16:45.057033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.057058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.057170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.057195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.057414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.057440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.057601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.057627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-07-25 04:16:45.057801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.057826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 
00:33:30.026 [2024-07-25 04:16:45.058001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-07-25 04:16:45.058030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.058196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.058222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.058346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.058372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.058499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.058540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.058711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.058738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 
00:33:30.027 [2024-07-25 04:16:45.058930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.058959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.059103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.059133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.059284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.059311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.059504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.059532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.059661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.059689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 
00:33:30.027 [2024-07-25 04:16:45.059828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.059854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.059968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.059995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.060167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.060195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.060385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.060411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.060559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.060585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 
00:33:30.027 [2024-07-25 04:16:45.060791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.060819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.060955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.060980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.061151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.061177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.061322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.061364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.061558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.061588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 
00:33:30.027 [2024-07-25 04:16:45.061750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.061779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.061938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.061967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.062133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.062158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.062329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.062359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.062543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.062572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 
00:33:30.027 [2024-07-25 04:16:45.062735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.062761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.062878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.062904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.063088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.063116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.063284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.063311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-07-25 04:16:45.063467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-07-25 04:16:45.063496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 
00:33:30.030 [same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated for timestamps 04:16:45.063624 through 04:16:45.083873]
00:33:30.030 [2024-07-25 04:16:45.083999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-07-25 04:16:45.084025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-07-25 04:16:45.084192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-07-25 04:16:45.084221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-07-25 04:16:45.084406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-07-25 04:16:45.084432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-07-25 04:16:45.084546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-07-25 04:16:45.084572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-07-25 04:16:45.084703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-07-25 04:16:45.084729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 
00:33:30.030 [2024-07-25 04:16:45.084871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-07-25 04:16:45.084897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.085088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.085117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.085258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.085287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.085425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.085451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.085600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.085625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 
00:33:30.031 [2024-07-25 04:16:45.085797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.085826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.085992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.086018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.086162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.086187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.086319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.086345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.086493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.086518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 
00:33:30.031 [2024-07-25 04:16:45.086687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.086719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.086858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.086886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.087053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.087080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.087248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.087277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.087439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.087469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 
00:33:30.031 [2024-07-25 04:16:45.087615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.087642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.087789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.087815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.087995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.088024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.088202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.088227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.088353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.088379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 
00:33:30.031 [2024-07-25 04:16:45.088509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.088534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.088709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.088734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.088878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.088907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.089078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.089104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.089283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.089309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 
00:33:30.031 [2024-07-25 04:16:45.089474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.089504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.089662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.089691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.089849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.089875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.090027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.090052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.090216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.090251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 
00:33:30.031 [2024-07-25 04:16:45.090395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.090421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.090564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.090608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.090760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.090789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.090930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.090956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.091073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.091098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 
00:33:30.031 [2024-07-25 04:16:45.091275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-07-25 04:16:45.091305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-07-25 04:16:45.091450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.091475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.091621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.091663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.091835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.091864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.092028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.092054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 
00:33:30.032 [2024-07-25 04:16:45.092213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.092247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.092406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.092434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.092608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.092634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.092752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.092794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.092980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.093009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 
00:33:30.032 [2024-07-25 04:16:45.093166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.093192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.093339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.093365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.093472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.093497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.093680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.093706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.093855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.093880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 
00:33:30.032 [2024-07-25 04:16:45.093996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.094021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.094168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.094198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.094325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.094351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.094499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.094525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.094713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.094740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 
00:33:30.032 [2024-07-25 04:16:45.094911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.094940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.095105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.095134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.095325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.095351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.095503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.095529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.095639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.095664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 
00:33:30.032 [2024-07-25 04:16:45.095782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.095808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.095950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.095976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.096164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.096190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.096343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.096369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.096481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.096507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 
00:33:30.032 [2024-07-25 04:16:45.096710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.096739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.096929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.096955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.097113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.097141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.097296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.097325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.097523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.097548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 
00:33:30.032 [2024-07-25 04:16:45.097692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.097720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.097849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.097878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.098045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-07-25 04:16:45.098070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-07-25 04:16:45.098265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.098294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.098448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.098477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 
00:33:30.033 [2024-07-25 04:16:45.098646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.098672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.098834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.098862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.099028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.099057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.099221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.099257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.099385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.099412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 
00:33:30.033 [2024-07-25 04:16:45.099578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.099607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.099747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.099773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.099916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.099957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.100129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.100155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.100264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.100290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 
00:33:30.033 [2024-07-25 04:16:45.100441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.100483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.100671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.100699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.100836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.100862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.101011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.101054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.101177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.101205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 
00:33:30.033 [2024-07-25 04:16:45.101384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.101410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.101533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.101577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.101715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.101744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.101904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.101929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.102050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.102076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 
00:33:30.033 [2024-07-25 04:16:45.102252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.102295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.102434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.102460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.102614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.102639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.102755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.102781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.102929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.102956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 
00:33:30.033 [2024-07-25 04:16:45.103115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.103144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.103285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.103314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.103486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.103512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.103680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.103709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.103871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.103901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 
00:33:30.033 [2024-07-25 04:16:45.104055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.104084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.104266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.104308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.104457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.104484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.104639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.104665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.104811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.104855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 
00:33:30.033 [2024-07-25 04:16:45.104978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.105007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-07-25 04:16:45.105170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-07-25 04:16:45.105196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.105363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.105390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.105537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.105566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.105743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.105768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 
00:33:30.034 [2024-07-25 04:16:45.105915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.105959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.106119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.106148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.106295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.106321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.106467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.106493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.106624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.106657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 
00:33:30.034 [2024-07-25 04:16:45.106851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.106877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.107018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.107048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.107211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.107240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.107412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.107439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.107620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.107647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 
00:33:30.034 [2024-07-25 04:16:45.107781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.107807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.107952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.107979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.108133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.108160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.108309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.108340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.108480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.108506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 
00:33:30.034 [2024-07-25 04:16:45.108658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.108685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.108829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.108856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.108971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.108998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.109150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.109177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.109299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.109327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 
00:33:30.034 [2024-07-25 04:16:45.109529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.109556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.109731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.109760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.109925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.109955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.110193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.110223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.110395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.110422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 
00:33:30.034 [2024-07-25 04:16:45.110581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.110610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.110770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.110797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.110945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.110989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.111113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.111142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.111315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.111343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 
00:33:30.034 [2024-07-25 04:16:45.111468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.111495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.111671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.111711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.111853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.111880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.112079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-07-25 04:16:45.112108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-07-25 04:16:45.112270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.112300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 
00:33:30.035 [2024-07-25 04:16:45.112454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.112481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.112628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.112664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.112778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.112805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.112916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.112955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.113073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.113099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 
00:33:30.035 [2024-07-25 04:16:45.113276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.113307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.113451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.113478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.113680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.113709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.113872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.113899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.114053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.114079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 
00:33:30.035 [2024-07-25 04:16:45.114262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.114292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.114459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.114494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.114644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.114670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.114839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.114882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.115045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.115079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 
00:33:30.035 [2024-07-25 04:16:45.115270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.115297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.115427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.115469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.115604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.115634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.115809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.115835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.115984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.116015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 
00:33:30.035 [2024-07-25 04:16:45.116181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.116211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.116400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.116427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.116559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.116590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.116719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.116750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.116906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.116932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 
00:33:30.035 [2024-07-25 04:16:45.117044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.117081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.117291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.117321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.117472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.117499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.117688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.117718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-07-25 04:16:45.117858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-07-25 04:16:45.117887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 
00:33:30.036 [2024-07-25 04:16:45.118056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.118086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.118307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.118335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.118511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.118556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.118709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.118736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.118891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.118934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 
00:33:30.036 [2024-07-25 04:16:45.119094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.119123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.119301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.119328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.119447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.119508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.119701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.119730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.119912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.119938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 
00:33:30.036 [2024-07-25 04:16:45.120047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.120076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.120260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.120290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.120486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.120512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.120683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.120712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.120846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.120875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 
00:33:30.036 [2024-07-25 04:16:45.121017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.121045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.121168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.121195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.121366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.121395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.121557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.121595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.121791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.121831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 
00:33:30.036 [2024-07-25 04:16:45.122021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.122050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.122190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.122217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.122364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.122391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.122545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.122576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.122767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.122793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 
00:33:30.036 [2024-07-25 04:16:45.122938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.122979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.123105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.123134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.123283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.123310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.123447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.123474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.123644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.123675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 
00:33:30.036 [2024-07-25 04:16:45.123823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.123850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.124010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.124051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.124216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.124261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.124418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.124445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.124617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.124644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 
00:33:30.036 [2024-07-25 04:16:45.124770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.124813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.125022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.125048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.125162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-07-25 04:16:45.125188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-07-25 04:16:45.125340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.125370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.125513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.125539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 
00:33:30.037 [2024-07-25 04:16:45.125653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.125680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.125848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.125877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.126077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.126103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.126248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.126278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.126452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.126478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 
00:33:30.037 [2024-07-25 04:16:45.126616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.126651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.126793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.126820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.126943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.126978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.127154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.127180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.127331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.127361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 
00:33:30.037 [2024-07-25 04:16:45.127519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.127548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.127686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.127712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.127833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.127860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.127979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.128005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.128159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.128186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 
00:33:30.037 [2024-07-25 04:16:45.128339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.128369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.128521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.128551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.128723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.128759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.128878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.128923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.129058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.129090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 
00:33:30.037 [2024-07-25 04:16:45.129231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.129264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.129425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.129452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.129575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.129608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.129737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.129763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.129922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.129951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 
00:33:30.037 [2024-07-25 04:16:45.130137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.130173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.130345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.130372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.130500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.130544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.130706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.130735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.130908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.130934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 
00:33:30.037 [2024-07-25 04:16:45.131094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.131128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.131287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.131317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.131453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.131481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.131637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.131664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-07-25 04:16:45.131838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-07-25 04:16:45.131868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 
00:33:30.038 [2024-07-25 04:16:45.132053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-07-25 04:16:45.132084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-07-25 04:16:45.132219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-07-25 04:16:45.132287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-07-25 04:16:45.132445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-07-25 04:16:45.132471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-07-25 04:16:45.132593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-07-25 04:16:45.132620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-07-25 04:16:45.132772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-07-25 04:16:45.132799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 
00:33:30.038 [2024-07-25 04:16:45.132923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-07-25 04:16:45.132950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-07-25 04:16:45.133151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-07-25 04:16:45.133178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-07-25 04:16:45.133344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-07-25 04:16:45.133374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-07-25 04:16:45.133510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-07-25 04:16:45.133539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-07-25 04:16:45.133683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-07-25 04:16:45.133709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 
00:33:30.038 [... identical three-record sequence (posix.c:1023:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated for every retry from 04:16:45.133839 through 04:16:45.154617 ...]
00:33:30.041 [2024-07-25 04:16:45.154617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.041 [2024-07-25 04:16:45.154656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.041 qpair failed and we were unable to recover it.
00:33:30.041 [2024-07-25 04:16:45.154795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.154821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.154961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.154988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.155135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.155164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.155329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.155356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.155482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.155508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 
00:33:30.041 [2024-07-25 04:16:45.155688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.155732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.155873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.155899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.156051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.156094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.156218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.156265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.156412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.156450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 
00:33:30.041 [2024-07-25 04:16:45.156618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.156648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.156789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.156819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.156988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.157014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.157169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.157199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.157346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.157384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 
00:33:30.041 [2024-07-25 04:16:45.157500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.157528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.157703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.157733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.157876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.157920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.158091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.158118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.158268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.158312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 
00:33:30.041 [2024-07-25 04:16:45.158481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.158510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.158668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.158695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.158868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-07-25 04:16:45.158898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-07-25 04:16:45.159061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.159087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.159213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.159249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 
00:33:30.042 [2024-07-25 04:16:45.159368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.159411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.159579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.159609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.159760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.159795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.159935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.159962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.160083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.160110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 
00:33:30.042 [2024-07-25 04:16:45.160301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.160353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.160484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.160513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.160696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.160741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.160942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.160986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.161103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.161130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 
00:33:30.042 [2024-07-25 04:16:45.161303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.161349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.161484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.161515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.161659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.161689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.161876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.161915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.162198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.162268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 
00:33:30.042 [2024-07-25 04:16:45.162441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.162467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.162586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.162612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.162773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.162818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.163009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.163051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.163227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.163263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 
00:33:30.042 [2024-07-25 04:16:45.163432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.163478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.163610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.163653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.163827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.163856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.164024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.164052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.164204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.164231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 
00:33:30.042 [2024-07-25 04:16:45.164388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.164415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.164644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.164698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.164894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.164924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.165114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.165143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.165298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.165339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 
00:33:30.042 [2024-07-25 04:16:45.165490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.165541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.165688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.165716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.165901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.165933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.166072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.166103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.166285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.166313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 
00:33:30.042 [2024-07-25 04:16:45.166479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.042 [2024-07-25 04:16:45.166510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.042 qpair failed and we were unable to recover it. 00:33:30.042 [2024-07-25 04:16:45.166694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.166724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.166893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.166922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.167051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.167081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.167307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.167334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 
00:33:30.043 [2024-07-25 04:16:45.167449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.167477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.167601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.167629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.167803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.167831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.167996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.168026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.168194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.168223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 
00:33:30.043 [2024-07-25 04:16:45.168401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.168429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.168598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.168627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.168786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.168816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.168990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.169021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.169195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.169225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 
00:33:30.043 [2024-07-25 04:16:45.169403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.169430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.169606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.169636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.169795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.169825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.170037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.170092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.170236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.170290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 
00:33:30.043 [2024-07-25 04:16:45.170437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.170464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.170639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.170666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.170850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.170879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.171038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.171068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.171232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.171279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 
00:33:30.043 [2024-07-25 04:16:45.171404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.171432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.171557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.171584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.171776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.171807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.171965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.171995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.172144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.172171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 
00:33:30.043 [2024-07-25 04:16:45.172331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.172359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.172507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.172562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.172727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.172757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.172919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.172950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.173094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.173127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 
00:33:30.043 [2024-07-25 04:16:45.173285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.173313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.173432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.173459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.173585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.173613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.173760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.173788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.043 qpair failed and we were unable to recover it. 00:33:30.043 [2024-07-25 04:16:45.173947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.043 [2024-07-25 04:16:45.174004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 
00:33:30.044 [2024-07-25 04:16:45.174181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.174210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.174377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.174405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.174548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.174575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.174720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.174770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.174913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.174940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 
00:33:30.044 [2024-07-25 04:16:45.175092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.175119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.175295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.175322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.175473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.175501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.175706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.175773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.175937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.175968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 
00:33:30.044 [2024-07-25 04:16:45.176143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.176171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.176325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.176362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.176527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.176565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.176729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.176759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.176955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.177014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 
00:33:30.044 [2024-07-25 04:16:45.177206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.177236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.177430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.177457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.177651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.177715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.177874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.177904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.178037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.178067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 
00:33:30.044 [2024-07-25 04:16:45.178258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.178308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.178486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.178513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.178668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.178712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.178983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.179041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.179191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.179219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 
00:33:30.044 [2024-07-25 04:16:45.179410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.179437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.179599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.179644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.179782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.179826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.179994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.180039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.180155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.180183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 
00:33:30.044 [2024-07-25 04:16:45.180377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.180421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.180628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-07-25 04:16:45.180673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-07-25 04:16:45.180932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.180983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.181133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.181161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.181362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.181408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 
00:33:30.045 [2024-07-25 04:16:45.181613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.181658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.181820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.181864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.182015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.182043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.182173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.182202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.182392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.182419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 
00:33:30.045 [2024-07-25 04:16:45.182571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.182601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.182787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.182817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.182950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.182980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.183118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.183149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.183318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.183347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 
00:33:30.045 [2024-07-25 04:16:45.183498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.183547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.183675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.183705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.183886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.183916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.184074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.184104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.184304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.184332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 
00:33:30.045 [2024-07-25 04:16:45.184473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.184500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.184653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.184683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.184822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.184852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.185011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.185041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.185170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.185201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 
00:33:30.045 [2024-07-25 04:16:45.185379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.185407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.185542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.185570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.185688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.185715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.185885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.185917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.186085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.186116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 
00:33:30.045 [2024-07-25 04:16:45.186276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.186319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.186450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.186477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.186638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.186666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.186798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.186827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.187015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.187045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 
00:33:30.045 [2024-07-25 04:16:45.187234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.187273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.187428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.187455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.187637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.187667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.187849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.187880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-07-25 04:16:45.188080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-07-25 04:16:45.188109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 
00:33:30.045 [2024-07-25 04:16:45.188295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-07-25 04:16:45.188322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-07-25 04:16:45.188476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-07-25 04:16:45.188503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-07-25 04:16:45.188672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-07-25 04:16:45.188702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-07-25 04:16:45.188855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-07-25 04:16:45.188885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-07-25 04:16:45.189098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-07-25 04:16:45.189128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 
00:33:30.046 [2024-07-25 04:16:45.189306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-07-25 04:16:45.189333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-07-25 04:16:45.189455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-07-25 04:16:45.189482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-07-25 04:16:45.189659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-07-25 04:16:45.189686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-07-25 04:16:45.189857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-07-25 04:16:45.189886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-07-25 04:16:45.190082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-07-25 04:16:45.190112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 
00:33:30.046 [2024-07-25 04:16:45.190254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.190293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.190466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.190493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.190698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.190729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.190980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.191033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.191223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.191261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.191412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.191440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.191567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.191596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.191743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.191770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.191918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.191948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.192175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.192206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.192384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.192412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.192551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.192579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.192744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.192775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.192912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.192942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.193134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.193164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.193303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.193331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.193503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.193545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.193704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.193735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.193892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.193936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.194089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.194119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.194319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.194347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.194469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.194497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.194676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.194703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.194874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.194904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.195040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.195085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.195217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.195255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.195452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.195483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.195646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.046 [2024-07-25 04:16:45.195689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.046 qpair failed and we were unable to recover it.
00:33:30.046 [2024-07-25 04:16:45.195826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.195856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.195987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.196018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.196196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.196237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.196390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.196419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.196575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.196603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.196764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.196809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.197003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.197048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.197218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.197271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.197454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.197482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.197658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.197685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.197817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.197846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.197996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.198028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.198182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.198210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.198349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.198378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.198534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.198563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.198714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.198742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.198922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.198949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.199131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.199161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.199304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.199332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.199477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.199505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.199734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.199797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.199957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.199987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.200156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.200186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.200327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.200356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.200472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.200500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.200715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.200760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.200903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.200936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.201075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.201110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.201290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.201318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.201493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.201537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.201699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.201730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.201923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.201967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.202154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.202184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.202380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.202408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.202556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.202584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.202749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.202781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.202922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.202953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.047 qpair failed and we were unable to recover it.
00:33:30.047 [2024-07-25 04:16:45.203137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.047 [2024-07-25 04:16:45.203168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.203335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.203369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.203520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.203548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.203719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.203747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.203871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.203899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.204044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.204072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.204223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.204278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.204456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.204484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.204650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.204681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.204877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.204908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.205067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.205098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.205261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.205306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.205459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.205488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.205656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.205687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.205889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.205932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.206144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.206175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.206341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.206369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.206489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.206517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.206727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.206755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.206903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.206934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.207099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.207130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.207297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.207326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.207477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.207505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.207657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.207689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.207848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.207878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.208023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.208055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.208187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.208219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.208393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.208422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.208556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.208596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.208749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-07-25 04:16:45.208795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.048 qpair failed and we were unable to recover it.
00:33:30.048 [2024-07-25 04:16:45.209074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-07-25 04:16:45.209128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-07-25 04:16:45.209305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-07-25 04:16:45.209334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-07-25 04:16:45.209507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-07-25 04:16:45.209551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-07-25 04:16:45.209754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-07-25 04:16:45.209781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-07-25 04:16:45.209964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-07-25 04:16:45.210018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 
00:33:30.048 [2024-07-25 04:16:45.210153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-07-25 04:16:45.210183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-07-25 04:16:45.210322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-07-25 04:16:45.210349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-07-25 04:16:45.210479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-07-25 04:16:45.210506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-07-25 04:16:45.210660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-07-25 04:16:45.210688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.210834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.210864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 
00:33:30.049 [2024-07-25 04:16:45.211033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.211063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.211199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.211230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.211389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.211416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.211560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.211588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.211746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.211777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 
00:33:30.049 [2024-07-25 04:16:45.211944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.211974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.212129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.212160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.212320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.212348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.212475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.212502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.212624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.212652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 
00:33:30.049 [2024-07-25 04:16:45.212798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.212825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.212962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.212992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.213157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.213187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.213386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.213414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.213542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.213569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 
00:33:30.049 [2024-07-25 04:16:45.213717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.213766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.213954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.213984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.214202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.214232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.214410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.214437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.214611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.214654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 
00:33:30.049 [2024-07-25 04:16:45.214861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.214888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.215087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.215117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.215278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.215308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.215502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.215530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.215815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.215869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 
00:33:30.049 [2024-07-25 04:16:45.216036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.216066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.216251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.216278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.216408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.216435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.216635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.216665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.049 [2024-07-25 04:16:45.216864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.216891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 
00:33:30.049 [2024-07-25 04:16:45.217067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.049 [2024-07-25 04:16:45.217097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.049 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.217306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.217338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.217534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.217561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.217713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.217740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.217939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.217969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 
00:33:30.050 [2024-07-25 04:16:45.218125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.218153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.218289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.218334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.218505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.218535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.218705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.218733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.218898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.218928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 
00:33:30.050 [2024-07-25 04:16:45.219092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.219128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.219272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.219300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.219443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.219487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.219651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.219681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.219878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.219905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 
00:33:30.050 [2024-07-25 04:16:45.220077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.220107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.220301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.220332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.220475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.220502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.220652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.220696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.220852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.220882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 
00:33:30.050 [2024-07-25 04:16:45.221030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.221057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.221187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.221213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.221371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.221398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.221541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.221568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.221728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.221757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 
00:33:30.050 [2024-07-25 04:16:45.221916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.221946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.222142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.222173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.222348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.222379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.222512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.222543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.222705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.222732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 
00:33:30.050 [2024-07-25 04:16:45.222859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.222904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.223092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.223123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.223273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.223301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.223479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.223506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.223721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.223748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 
00:33:30.050 [2024-07-25 04:16:45.223888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.223915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.224063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.224091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.224210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.224238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.224414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-07-25 04:16:45.224441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.050 qpair failed and we were unable to recover it. 00:33:30.050 [2024-07-25 04:16:45.224590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.051 [2024-07-25 04:16:45.224617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.051 qpair failed and we were unable to recover it. 
00:33:30.051 [2024-07-25 04:16:45.224751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.051 [2024-07-25 04:16:45.224778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.051 qpair failed and we were unable to recover it. 00:33:30.051 [2024-07-25 04:16:45.224892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.051 [2024-07-25 04:16:45.224919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.051 qpair failed and we were unable to recover it. 00:33:30.051 [2024-07-25 04:16:45.225108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.051 [2024-07-25 04:16:45.225138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.051 qpair failed and we were unable to recover it. 00:33:30.051 [2024-07-25 04:16:45.225299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.051 [2024-07-25 04:16:45.225330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.051 qpair failed and we were unable to recover it. 00:33:30.051 [2024-07-25 04:16:45.225525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.051 [2024-07-25 04:16:45.225552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.051 qpair failed and we were unable to recover it. 
00:33:30.051 [2024-07-25 04:16:45.225710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.051 [2024-07-25 04:16:45.225740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.051 qpair failed and we were unable to recover it.
00:33:30.054 [2024-07-25 04:16:45.248040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.248070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.248269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.248297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.248469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.248504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.248663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.248693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.248888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.248916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 
00:33:30.054 [2024-07-25 04:16:45.249051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.249081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.249212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.249261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.249468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.249495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.249622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.249649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.249792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.249819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 
00:33:30.054 [2024-07-25 04:16:45.250012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.250039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.250187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.250215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.250376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.250405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.250554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.250581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.250779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.250809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 
00:33:30.054 [2024-07-25 04:16:45.250946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.250976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.251148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.251178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.251360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.054 [2024-07-25 04:16:45.251388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.054 qpair failed and we were unable to recover it. 00:33:30.054 [2024-07-25 04:16:45.251551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.251581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.251749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.251777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 
00:33:30.055 [2024-07-25 04:16:45.251906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.251934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.252059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.252086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.252228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.252262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.252375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.252418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.252574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.252605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 
00:33:30.055 [2024-07-25 04:16:45.252782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.252809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.253002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.253032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.253220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.253266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.253414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.253441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.253602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.253637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 
00:33:30.055 [2024-07-25 04:16:45.253760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.253790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.253964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.253992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.254111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.254157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.254353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.254385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.254549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.254576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 
00:33:30.055 [2024-07-25 04:16:45.254768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.254798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.254993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.255023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.255206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.255233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.255412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.255442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.255605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.255635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 
00:33:30.055 [2024-07-25 04:16:45.255803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.255831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.256000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.256030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.256192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.256222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.256378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.256406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.256549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.256593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 
00:33:30.055 [2024-07-25 04:16:45.256732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.256762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.256935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.256962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.257105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.257133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.257260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.257289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 00:33:30.055 [2024-07-25 04:16:45.257438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.055 [2024-07-25 04:16:45.257466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.055 qpair failed and we were unable to recover it. 
00:33:30.055 [2024-07-25 04:16:45.257658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.257688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.257847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.257878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.258042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.258073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.258234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.258286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.258436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.258464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 
00:33:30.056 [2024-07-25 04:16:45.258607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.258634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.258777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.258805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.258960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.258988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.259168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.259195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.259316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.259344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 
00:33:30.056 [2024-07-25 04:16:45.259501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.259543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.259707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.259734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.259881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.259908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.260026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.260054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.260182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.260210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 
00:33:30.056 [2024-07-25 04:16:45.260343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.260372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.260577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.260608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.260776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.260803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.260996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.261027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.261213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.261263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 
00:33:30.056 [2024-07-25 04:16:45.261451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.261484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.261622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.261653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.261819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.261850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.262018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.262046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.262161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.262204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 
00:33:30.056 [2024-07-25 04:16:45.262404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.262435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.262607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.262633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.262828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.262858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.263021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.263051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 00:33:30.056 [2024-07-25 04:16:45.263210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.056 [2024-07-25 04:16:45.263237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.056 qpair failed and we were unable to recover it. 
00:33:30.347 [2024-07-25 04:16:45.283719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.283747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.283897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.283942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.284067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.284097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.284256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.284284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.284420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.284448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 
00:33:30.347 [2024-07-25 04:16:45.284585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.284615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.284787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.284814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.284939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.284967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.285124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.285151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.285264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.285292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 
00:33:30.347 [2024-07-25 04:16:45.285435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.285463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.285638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.285665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.285844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.285872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.286023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.286051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.286191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.286219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 
00:33:30.347 [2024-07-25 04:16:45.286368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.286410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.286592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.286625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.286783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.286812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.286989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.287017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.287191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.287219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 
00:33:30.347 [2024-07-25 04:16:45.287370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.287399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.287596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.287665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.287864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.287909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.288087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.288135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.288267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.288296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 
00:33:30.347 [2024-07-25 04:16:45.288408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.288436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.288612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.288642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.288913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.288964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.289136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.289167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.289333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.289360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 
00:33:30.347 [2024-07-25 04:16:45.289482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.289511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.289717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-07-25 04:16:45.289747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-07-25 04:16:45.289907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.289937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.290124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.290154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.290357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.290386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 
00:33:30.348 [2024-07-25 04:16:45.290556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.290587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.290776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.290806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.290973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.291004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.291197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.291227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.291410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.291437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 
00:33:30.348 [2024-07-25 04:16:45.291592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.291619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.291769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.291799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.291991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.292022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.292221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.292266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.292438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.292466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 
00:33:30.348 [2024-07-25 04:16:45.292620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.292648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.292837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.292868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.293068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.293098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.293277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.293306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.293483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.293527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 
00:33:30.348 [2024-07-25 04:16:45.293692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.293723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.293882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.293913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.294103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.294133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.294314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.294342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.294464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.294491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 
00:33:30.348 [2024-07-25 04:16:45.294664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.294694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.294886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.294917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.295080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.295137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.295321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.295350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.295529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.295558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 
00:33:30.348 [2024-07-25 04:16:45.295707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.295735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.295934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.295980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.296155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.296184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.296365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.296411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.296610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.296655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 
00:33:30.348 [2024-07-25 04:16:45.296817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.296863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.297004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.297034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.297204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.297232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.297416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-07-25 04:16:45.297461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-07-25 04:16:45.297638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-07-25 04:16:45.297686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 
00:33:30.349 [2024-07-25 04:16:45.297858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-07-25 04:16:45.297911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-07-25 04:16:45.298040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-07-25 04:16:45.298068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-07-25 04:16:45.298186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-07-25 04:16:45.298216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-07-25 04:16:45.298402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-07-25 04:16:45.298448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-07-25 04:16:45.298612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-07-25 04:16:45.298643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 
00:33:30.349 [2024-07-25 04:16:45.298879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.298928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.299081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.299109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.299255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.299283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.299452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.299497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.299634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.299681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.299881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.299927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.300102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.300130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.300291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.300322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.300543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.300588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.300780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.300809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.300961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.300989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.301118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.301145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.301309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.301357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.301512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.301541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.301736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.301780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.301903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.301932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.302080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.302109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.302285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.302314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.302518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.302564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.302738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.302782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.302953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.302982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.303126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.303154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.303355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.303401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.303572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.303620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.303792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.303843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.304016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.304044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.304220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.304255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.304405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.304451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.304621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.304667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.304863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.304908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.349 [2024-07-25 04:16:45.305062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.349 [2024-07-25 04:16:45.305092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.349 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.305216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.305257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.305465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.305496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.305659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.305689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.305835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.305865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.306023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.306053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.306224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.306260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.306413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.306441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.306605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.306634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.306800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.306830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.307027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.307058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.307219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.307257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.307427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.307456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.307626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.307657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.307846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.307877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.308009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.308052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.308250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.308297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.308471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.308498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.308658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.308686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.308811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.308839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.309007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.309037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.309206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.309234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.309396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.309424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.309576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.309604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.309771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.309801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.309987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.310017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.310158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.310188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.310384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.310412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.310527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.310554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.310676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.310704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.310893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.310923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.311064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.311094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.311257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.311302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.311421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.350 [2024-07-25 04:16:45.311448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.350 qpair failed and we were unable to recover it.
00:33:30.350 [2024-07-25 04:16:45.311622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.311649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.311780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.311810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.311978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.312008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.312168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.312198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.312337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.312365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.312487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.312514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.312708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.312738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.312893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.312923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.313085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.313115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.313296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.313324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.313444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.313472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.313594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.313621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.313742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.313773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.313970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.313999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.314159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.314189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.314341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.314369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.314560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.314590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.314857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.314909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.315066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.315096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.315267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.315311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.315443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.315470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.315604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.315632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.315807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.315839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.316039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.316070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.316251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.316278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.316424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.316451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.351 [2024-07-25 04:16:45.316603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.351 [2024-07-25 04:16:45.316630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.351 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.316806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.316849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.317016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.317046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.317210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.317237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.317396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.317423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.317568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.317598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.317768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.317795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.317948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.317975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.318155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.318186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.318367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.318395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.318548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.318594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.318794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.318824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.319014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.319041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.319204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.319234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.319398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.319425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.319576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.319603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.319771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.319817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.320005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.320034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.320209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.320236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.320375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.320419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.320581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.320612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.320776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.320803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.320916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.320943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.321121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.321151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.321305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.352 [2024-07-25 04:16:45.321333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.352 qpair failed and we were unable to recover it.
00:33:30.352 [2024-07-25 04:16:45.321505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-07-25 04:16:45.321549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-07-25 04:16:45.321706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-07-25 04:16:45.321736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-07-25 04:16:45.321919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-07-25 04:16:45.321946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-07-25 04:16:45.322111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-07-25 04:16:45.322141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-07-25 04:16:45.322277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-07-25 04:16:45.322308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 
00:33:30.352 [2024-07-25 04:16:45.322455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-07-25 04:16:45.322483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-07-25 04:16:45.322632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-07-25 04:16:45.322675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-07-25 04:16:45.322827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-07-25 04:16:45.322858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-07-25 04:16:45.323058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-07-25 04:16:45.323084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-07-25 04:16:45.323257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-07-25 04:16:45.323288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 
00:33:30.352 [2024-07-25 04:16:45.323461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-07-25 04:16:45.323489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-07-25 04:16:45.323634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-07-25 04:16:45.323661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-07-25 04:16:45.323788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-07-25 04:16:45.323815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.323963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.323992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.324145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.324175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 
00:33:30.353 [2024-07-25 04:16:45.324320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.324349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.324497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.324541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.324704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.324731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.324841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.324885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.325048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.325078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 
00:33:30.353 [2024-07-25 04:16:45.325239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.325275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.325408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.325438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.325605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.325635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.325774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.325801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.325955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.325982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 
00:33:30.353 [2024-07-25 04:16:45.326129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.326156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.326302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.326332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.326503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.326547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.326716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.326747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.326912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.326944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 
00:33:30.353 [2024-07-25 04:16:45.327136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.327167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.327356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.327387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.327557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.327584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.327726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.327753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.327891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.327921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 
00:33:30.353 [2024-07-25 04:16:45.328099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.328130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.328312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.328339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.328465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.328492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.328668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.328696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.328861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.328892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 
00:33:30.353 [2024-07-25 04:16:45.329053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.329083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.329281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.329309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.329507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.329537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.329696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.329727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.329928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.329955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 
00:33:30.353 [2024-07-25 04:16:45.330116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.330146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.330276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-07-25 04:16:45.330308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-07-25 04:16:45.330472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.330499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.330672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.330716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.330886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.330916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 
00:33:30.354 [2024-07-25 04:16:45.331084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.331112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.331239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.331291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.331468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.331495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.331648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.331676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.331852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.331879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 
00:33:30.354 [2024-07-25 04:16:45.332025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.332068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.332255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.332299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.332458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.332486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.332685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.332715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.332884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.332912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 
00:33:30.354 [2024-07-25 04:16:45.333036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.333064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.333223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.333261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.333414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.333441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.333637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.333667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.333827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.333856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 
00:33:30.354 [2024-07-25 04:16:45.334021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.334048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.334193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.334223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.334385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.334414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.334586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.334613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.334775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.334806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 
00:33:30.354 [2024-07-25 04:16:45.334945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.334980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.335173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.335202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.335375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.335403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.335579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.335609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.335811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.335838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 
00:33:30.354 [2024-07-25 04:16:45.335978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.336007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.336198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.336228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.336414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.336441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.336552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.336595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.336753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.336783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 
00:33:30.354 [2024-07-25 04:16:45.336977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.337004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.337182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.337211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.337393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.337420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.337541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.337569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-07-25 04:16:45.337748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-07-25 04:16:45.337778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 
00:33:30.355 [2024-07-25 04:16:45.337943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.337972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.338191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.338221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.338396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.338424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.338574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.338604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.338768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.338795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 
00:33:30.355 [2024-07-25 04:16:45.338917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.338945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.339088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.339115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.339256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.339301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.339421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.339448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.339695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.339727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 
00:33:30.355 [2024-07-25 04:16:45.339962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.339992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.340226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.340285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.340442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.340475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.340601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.340629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.340752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.340779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 
00:33:30.355 [2024-07-25 04:16:45.340953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.340981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.341161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.341189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.341359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.341390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.341527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.341557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.341719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.341746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 
00:33:30.355 [2024-07-25 04:16:45.341900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.341927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.342077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.342104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.342280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.342308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.342454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.342497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.342681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.342711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 
00:33:30.355 [2024-07-25 04:16:45.342905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.342932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.343108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.343139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.343299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.343329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.343504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.343531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.343706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.343733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 
00:33:30.355 [2024-07-25 04:16:45.343901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.343931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.344095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.344122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.344252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.344281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.344435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.344479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-07-25 04:16:45.344649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-07-25 04:16:45.344676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 
00:33:30.356 [2024-07-25 04:16:45.344842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.344871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.345003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.345033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.345192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.345219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.345361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.345389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.345534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.345561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 
00:33:30.356 [2024-07-25 04:16:45.345705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.345733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.345876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.345903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.346050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.346078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.346253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.346281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.346468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.346499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 
00:33:30.356 [2024-07-25 04:16:45.346637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.346667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.346832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.346858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.346986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.347041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.347214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.347253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 983973 Killed "${NVMF_APP[@]}" "$@" 00:33:30.356 [2024-07-25 04:16:45.347450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.347477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 
00:33:30.356 [2024-07-25 04:16:45.347644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.347675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:33:30.356 [2024-07-25 04:16:45.347842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.347872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:30.356 [2024-07-25 04:16:45.348043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.348071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:30.356 [2024-07-25 04:16:45.348204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.348258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:30.356 qpair failed and we were unable to recover it. 
00:33:30.356 [2024-07-25 04:16:45.348396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.356 [2024-07-25 04:16:45.348426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.348599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.348627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.348751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.348782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.348935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.348961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.349150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.349176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 
00:33:30.356 [2024-07-25 04:16:45.349314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.349341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-07-25 04:16:45.349490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-07-25 04:16:45.349543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.349740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.349766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.349959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.349999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.350132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.350160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 
00:33:30.357 [2024-07-25 04:16:45.350312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.350339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.350488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.350515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.350644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.350671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.350814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.350840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.350991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.351018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 
00:33:30.357 [2024-07-25 04:16:45.351142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.351169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.351386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.351413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.351552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.351582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.351752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.351781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.351957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.351984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 
00:33:30.357 [2024-07-25 04:16:45.352144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.352170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.352318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.352344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.352490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.352516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.352636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.352678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.352846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.352875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 
00:33:30.357 [2024-07-25 04:16:45.353069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.353095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=984524 00:33:30.357 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:30.357 [2024-07-25 04:16:45.353268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.353305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 984524 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.353475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.353501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 
00:33:30.357 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 984524 ']' 00:33:30.357 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.357 [2024-07-25 04:16:45.353654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.353680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:30.357 [2024-07-25 04:16:45.353854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.357 [2024-07-25 04:16:45.353880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:30.357 [2024-07-25 04:16:45.354058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.354087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 
00:33:30.357 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.357 [2024-07-25 04:16:45.354284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.354310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.354495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.354524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.354688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.354716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.354880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.354907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-07-25 04:16:45.355055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-07-25 04:16:45.355082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 
00:33:30.357 [2024-07-25 04:16:45.355258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.357 [2024-07-25 04:16:45.355289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.357 qpair failed and we were unable to recover it.
00:33:30.357 [2024-07-25 04:16:45.355466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.357 [2024-07-25 04:16:45.355495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.357 qpair failed and we were unable to recover it.
00:33:30.357 [2024-07-25 04:16:45.355667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.355695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.355868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.355897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.356045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.356071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.356188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.356214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.356379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.356406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.356547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.356574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.356711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.356739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.356904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.356933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.357100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.357126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.357255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.357281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.357456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.357482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.357628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.357655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.357806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.357849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.358028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.358056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.358182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.358209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.358359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.358386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.358575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.358604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.358777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.358803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.358987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.359016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.359183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.359212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.359410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.359437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.359621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.359650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.359854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.359883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.360023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.360050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.360198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.360250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.360493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.360522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.360722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.360748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.360919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.360949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.361111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.361140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.361373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.361400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.361562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.361591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.361776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.361803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.361975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.362001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.362124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.362151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.362313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.362340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.358 qpair failed and we were unable to recover it.
00:33:30.358 [2024-07-25 04:16:45.362504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.358 [2024-07-25 04:16:45.362535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.362703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.362732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.362892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.362921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.363088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.363116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.363261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.363299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.363450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.363477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.363623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.363649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.363845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.363875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.364066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.364095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.364290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.364317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.364437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.364464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.364638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.364665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.364817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.364844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.365042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.365072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.365277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.365308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.365448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.365474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.365638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.365668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.365855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.365885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.366062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.366088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.366238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.366292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.366443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.366471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.366624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.366651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.366780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.366806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.366958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.366986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.367170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.367196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.367389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.367419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.367598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.367625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.367855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.367881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.368034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.368063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.368254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.368284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.368428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.368455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.368632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.368677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.368810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.368838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.368987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.369014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.369141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.369168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.369327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.369355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.369506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.369534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.359 [2024-07-25 04:16:45.369708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.359 [2024-07-25 04:16:45.369735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.359 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.369881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.369907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.370050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.370077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.370218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.370251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.370385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.370416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.370570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.370597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.370717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.370744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.370899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.370926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.371077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.371104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.371259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.371287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.371440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.371467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.371612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.371639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.371769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.371795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.371913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.371940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.372094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.372121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.372274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.372301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.372431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.372458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.372605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.372631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.372813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.372839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.372985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.373011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.373165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.360 [2024-07-25 04:16:45.373192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.360 qpair failed and we were unable to recover it.
00:33:30.360 [2024-07-25 04:16:45.373351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.373378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-07-25 04:16:45.373524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.373550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-07-25 04:16:45.373700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.373727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-07-25 04:16:45.373902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.373928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-07-25 04:16:45.374100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.374127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 
00:33:30.360 [2024-07-25 04:16:45.374275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.374303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-07-25 04:16:45.374429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.374455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-07-25 04:16:45.374586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.374613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-07-25 04:16:45.374760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.374786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-07-25 04:16:45.374929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.374956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 
00:33:30.360 [2024-07-25 04:16:45.375133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.375162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-07-25 04:16:45.375282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.375309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-07-25 04:16:45.375439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.375466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-07-25 04:16:45.375615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.375641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-07-25 04:16:45.375816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.375843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 
00:33:30.360 [2024-07-25 04:16:45.376015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-07-25 04:16:45.376042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.376190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.376217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.376480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.376507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.376663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.376689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.376868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.376894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 
00:33:30.361 [2024-07-25 04:16:45.377048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.377074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.377220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.377256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.377442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.377469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.377613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.377639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.377797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.377824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 
00:33:30.361 [2024-07-25 04:16:45.377970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.377996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.378145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.378171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.378317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.378344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.378496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.378521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.378661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.378689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 
00:33:30.361 [2024-07-25 04:16:45.378863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.378891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.379064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.379089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.379269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.379296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.379474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.379501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.379653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.379679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 
00:33:30.361 [2024-07-25 04:16:45.379804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.379830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.380012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.380038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.380163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.380190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.380332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.380360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.380506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.380533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 
00:33:30.361 [2024-07-25 04:16:45.380658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.380685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.380830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.380856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.381008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.381034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.381183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.381209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.381389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.381417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 
00:33:30.361 [2024-07-25 04:16:45.381567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.381592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.381711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.381737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-07-25 04:16:45.381882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-07-25 04:16:45.381909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.382032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.382059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.382185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.382213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 
00:33:30.362 [2024-07-25 04:16:45.382381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.382409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.382558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.382589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.382737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.382764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.382923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.382949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.383094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.383121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 
00:33:30.362 [2024-07-25 04:16:45.383282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.383310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.383458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.383484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.383637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.383664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.383809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.383837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.384014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.384041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 
00:33:30.362 [2024-07-25 04:16:45.384158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.384184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.384358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.384385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.384558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.384585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.384706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.384734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.384913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.384939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 
00:33:30.362 [2024-07-25 04:16:45.385108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.385135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.385257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.385285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.385428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.385454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.385582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.385608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.385753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.385780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 
00:33:30.362 [2024-07-25 04:16:45.385921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.385947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.386097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.386123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.386311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.386338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.386494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.386520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.386647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.386673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 
00:33:30.362 [2024-07-25 04:16:45.386849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.386875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.386998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.387025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.387276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.387303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.387450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.387481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.387634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.387661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 
00:33:30.362 [2024-07-25 04:16:45.387814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.387841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.387989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-07-25 04:16:45.388015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-07-25 04:16:45.388271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.388310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.388462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.388489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.388610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.388638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 
00:33:30.363 [2024-07-25 04:16:45.388795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.388822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.388971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.388998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.389178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.389204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.389367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.389395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.389539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.389566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 
00:33:30.363 [2024-07-25 04:16:45.389718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.389744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.389877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.389903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.390056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.390083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.390255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.390282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.390432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.390459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 
00:33:30.363 [2024-07-25 04:16:45.390593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.390620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.390755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.390782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.390921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.390947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.391124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.391150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.391273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.391300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 
00:33:30.363 [2024-07-25 04:16:45.391484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.391511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.391657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.391685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.391805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.391832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.391970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.391996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.392179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.392205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 
00:33:30.363 [2024-07-25 04:16:45.392356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.392383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.392563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.392590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.392723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.392749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.392924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.392950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.393098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.393125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 
00:33:30.363 [2024-07-25 04:16:45.393253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.393280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.393408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.393436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.393580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.393607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.393753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.393779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.393927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.393953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 
00:33:30.363 [2024-07-25 04:16:45.394106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.394133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.394262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.394289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.394422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.394448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-07-25 04:16:45.394598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-07-25 04:16:45.394625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.394748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.394779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 
00:33:30.364 [2024-07-25 04:16:45.394936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.394962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.395135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.395162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.395313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.395341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.395585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.395612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.395761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.395787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 
00:33:30.364 [2024-07-25 04:16:45.395934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.395960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.396079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.396105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.396280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.396306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.396455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.396482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.396637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.396663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 
00:33:30.364 [2024-07-25 04:16:45.396810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.396838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.396988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.397015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.397165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.397192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.397348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.397375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.397495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.397521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 
00:33:30.364 [2024-07-25 04:16:45.397640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.397666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.397842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.397868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.398019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.398045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.398219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.398253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.398406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.398433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 
00:33:30.364 [2024-07-25 04:16:45.398579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.398605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.398776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.398802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.398951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.398977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.399117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.399143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.399296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.399323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 
00:33:30.364 [2024-07-25 04:16:45.399473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.399500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.399644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.399673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.399821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.399847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.400024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.400051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.400199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.400224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 
00:33:30.364 [2024-07-25 04:16:45.400363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.400390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.400545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-07-25 04:16:45.400571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-07-25 04:16:45.400609] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:33:30.364 [2024-07-25 04:16:45.400679] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.364 [2024-07-25 04:16:45.400721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.400746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.400884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.400908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 
00:33:30.365 [2024-07-25 04:16:45.401032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.401057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.401208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.401234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.401388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.401416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.401586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.401612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.401741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.401767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 
00:33:30.365 [2024-07-25 04:16:45.401895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.401922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.402080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.402108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.402286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.402313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.402423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.402450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.402606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.402633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 
00:33:30.365 [2024-07-25 04:16:45.402809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.402836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.402959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.402986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.403164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.403190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.403368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.403395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.403516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.403543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 
00:33:30.365 [2024-07-25 04:16:45.403690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.403717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.403869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.403896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.404047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.404073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.404219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.404256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.404408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.404434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 
00:33:30.365 [2024-07-25 04:16:45.404588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.404615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.404762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.404790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.404941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.404967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.405119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.405146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.405307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.405334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 
00:33:30.365 [2024-07-25 04:16:45.405486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.405513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.405690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.405717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.405845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.405871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.406015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.406041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.406196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.406222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 
00:33:30.365 [2024-07-25 04:16:45.406392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.406419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.406564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.406590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.406722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.406749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.406865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.406891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-07-25 04:16:45.407039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-07-25 04:16:45.407065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 
00:33:30.365 [2024-07-25 04:16:45.407179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.407205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.407379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.407407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.407556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.407582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.407763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.407790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.407940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.407967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 
00:33:30.366 [2024-07-25 04:16:45.408142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.408169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.408344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.408372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.408520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.408547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.408698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.408725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.408867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.408894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 
00:33:30.366 [2024-07-25 04:16:45.409041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.409068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.409217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.409251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.409419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.409445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.409593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.409620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.409764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.409790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 
00:33:30.366 [2024-07-25 04:16:45.409907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.409933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.410109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.410136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.410285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.410312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.410450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.410476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.410621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.410647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 
00:33:30.366 [2024-07-25 04:16:45.410803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.410830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.410978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.411005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.411155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.411182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.411310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.411338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.411484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.411515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 
00:33:30.366 [2024-07-25 04:16:45.411692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.411719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.411892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.411918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.412094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.412121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.412272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.412300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 00:33:30.366 [2024-07-25 04:16:45.412450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.366 [2024-07-25 04:16:45.412476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.366 qpair failed and we were unable to recover it. 
00:33:30.366 [2024-07-25 04:16:45.412608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.412635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.412809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.412835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.412971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.412997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.413152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.413179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.413301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.413328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 
00:33:30.367 [2024-07-25 04:16:45.413455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.413481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.413609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.413637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.413810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.413837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.413994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.414021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.414193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.414220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 
00:33:30.367 [2024-07-25 04:16:45.414379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.414406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.414560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.414586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.414715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.414742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.414894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.414920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.415067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.415093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 
00:33:30.367 [2024-07-25 04:16:45.415338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.415365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.415537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.415564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.415736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.415763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.415909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.415936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.416080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.416106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 
00:33:30.367 [2024-07-25 04:16:45.416256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.416284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.416445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.416476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.416628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.416654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.416800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.416826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.416993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.417020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 
00:33:30.367 [2024-07-25 04:16:45.417145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.417171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.417317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.417344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.417487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.417513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.417665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.417691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.417838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.417864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 
00:33:30.367 [2024-07-25 04:16:45.418108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.418135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.418310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.418337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.418463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.418489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.418609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.418635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.418757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.418783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 
00:33:30.367 [2024-07-25 04:16:45.418930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.418957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.419072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.367 [2024-07-25 04:16:45.419099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.367 qpair failed and we were unable to recover it. 00:33:30.367 [2024-07-25 04:16:45.419249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.419277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.419426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.419452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.419629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.419656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 
00:33:30.368 [2024-07-25 04:16:45.419842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.419868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.420026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.420052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.420177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.420203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.420329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.420357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.420506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.420535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 
00:33:30.368 [2024-07-25 04:16:45.420676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.420703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.420881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.420907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.421038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.421065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.421211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.421237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.421372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.421399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 
00:33:30.368 [2024-07-25 04:16:45.421558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.421585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.421760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.421785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.421912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.421939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.422091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.422117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.422272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.422300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 
00:33:30.368 [2024-07-25 04:16:45.422428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.422454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.422625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.422652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.422774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.422800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.422959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.422985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.423159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.423184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 
00:33:30.368 [2024-07-25 04:16:45.423340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.423367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.423547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.423573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.423723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.423753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.423902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.423929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.424076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.424102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 
00:33:30.368 [2024-07-25 04:16:45.424231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.424265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.424420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.424447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.424690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.424717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.424956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.424983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.425128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.425154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 
00:33:30.368 [2024-07-25 04:16:45.425311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.425338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.425464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.425491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.425611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.425637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.368 qpair failed and we were unable to recover it. 00:33:30.368 [2024-07-25 04:16:45.425810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.368 [2024-07-25 04:16:45.425836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.426018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.426046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 
00:33:30.369 [2024-07-25 04:16:45.426192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.426218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.426356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.426383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.426559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.426587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.426724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.426750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.426904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.426930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 
00:33:30.369 [2024-07-25 04:16:45.427049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.427077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.427255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.427283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.427433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.427460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.427635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.427661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.427838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.427865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 
00:33:30.369 [2024-07-25 04:16:45.428017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.428043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.428197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.428223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.428395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.428422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.428551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.428577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.428703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.428729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 
00:33:30.369 [2024-07-25 04:16:45.428908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.428935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.429091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.429117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.429294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.429321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.429462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.429489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.429605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.429631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 
00:33:30.369 [2024-07-25 04:16:45.429807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.429833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.429984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.430011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.430136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.430162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.430346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.430372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.430494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.430520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 
00:33:30.369 [2024-07-25 04:16:45.430664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.430690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.430870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.430897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.431071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.431097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.431272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.431299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.431446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.431473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 
00:33:30.369 [2024-07-25 04:16:45.431588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.431614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.431763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.431789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.431940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.431967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.432092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.432118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.432298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.432325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 
00:33:30.369 [2024-07-25 04:16:45.432499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-07-25 04:16:45.432525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-07-25 04:16:45.432646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.432672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.432825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.432851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.432976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.433003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.433149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.433175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 
00:33:30.370 [2024-07-25 04:16:45.433317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.433345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.433519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.433545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.433674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.433700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.433943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.433970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.434140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.434167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 
00:33:30.370 [2024-07-25 04:16:45.434321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.434348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.434477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.434504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.434658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.434684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.434831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.434859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.435031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.435057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 
00:33:30.370 [2024-07-25 04:16:45.435171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.435197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.435390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.435418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.435565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.435591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.435707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.435733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.435906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.435933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 
00:33:30.370 [2024-07-25 04:16:45.436096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.436126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.436253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.436280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.436400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.436426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.436569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.436595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.436738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.436765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 
00:33:30.370 [2024-07-25 04:16:45.436942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.436969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.437121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.437147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.437296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.437323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.437468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.437494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.437647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.437674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 
00:33:30.370 [2024-07-25 04:16:45.437819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.437846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.437998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.438024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.438135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.438161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.438341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.438368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-07-25 04:16:45.438552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-07-25 04:16:45.438579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 
00:33:30.371 [2024-07-25 04:16:45.438731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.438757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-07-25 04:16:45.438913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.438939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-07-25 04:16:45.439089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.439115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-07-25 04:16:45.439267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.439295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-07-25 04:16:45.439411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.439437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 
00:33:30.371 [2024-07-25 04:16:45.439554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.439580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-07-25 04:16:45.439724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.439750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-07-25 04:16:45.439898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.439924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-07-25 04:16:45.440104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.440130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-07-25 04:16:45.440263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.440291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 
00:33:30.371 [2024-07-25 04:16:45.440408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.440434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-07-25 04:16:45.440610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.440636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-07-25 04:16:45.440777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.440803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-07-25 04:16:45.440980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.441007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-07-25 04:16:45.441136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-07-25 04:16:45.441162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 
00:33:30.371 EAL: No free 2048 kB hugepages reported on node 1
00:33:30.372 [2024-07-25 04:16:45.445342] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:30.372 [2024-07-25 04:16:45.445868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.445894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.446023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.446049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.446222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.446254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.446396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.446423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.446541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.446568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 
00:33:30.372 [2024-07-25 04:16:45.446700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.446727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.446883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.446910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.447031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.447057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.447218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.447251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.447405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.447432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 
00:33:30.372 [2024-07-25 04:16:45.447582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.447609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.447727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.447754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.447908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.447934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.448064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.448091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.448209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.448236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 
00:33:30.372 [2024-07-25 04:16:45.448480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.448508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.448662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.448688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.448801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.448828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.448980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.449008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.449186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.449212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 
00:33:30.372 [2024-07-25 04:16:45.449371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.449398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.449541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.449567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.449740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.449767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.449921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.449948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.450104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.450130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 
00:33:30.372 [2024-07-25 04:16:45.450284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.450311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.450432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.450463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.450629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.450658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.450781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-07-25 04:16:45.450808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-07-25 04:16:45.450987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.451014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 
00:33:30.373 [2024-07-25 04:16:45.451124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.451151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.451300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.451327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.451449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.451476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.451591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.451617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.451747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.451773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 
00:33:30.373 [2024-07-25 04:16:45.451896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.451923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.452040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.452067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.452231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.452265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.452451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.452478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.452652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.452679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 
00:33:30.373 [2024-07-25 04:16:45.452860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.452887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.453050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.453076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.453230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.453263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.453423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.453449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.453604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.453631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 
00:33:30.373 [2024-07-25 04:16:45.453805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.453832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.453958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.453984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.454129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.454155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.454308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.454335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.454463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.454491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 
00:33:30.373 [2024-07-25 04:16:45.454650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.454676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.454828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.454855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.454974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.455001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.455147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.455177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.455340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.455367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 
00:33:30.373 [2024-07-25 04:16:45.455542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.455569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.455742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.455769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.455919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.455945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.456124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.456151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.456284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.456314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 
00:33:30.373 [2024-07-25 04:16:45.456470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.456497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.456674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.456700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.456849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.456875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.457002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.457030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-07-25 04:16:45.457175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-07-25 04:16:45.457202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 
00:33:30.374 [2024-07-25 04:16:45.457331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-07-25 04:16:45.457359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-07-25 04:16:45.457489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-07-25 04:16:45.457516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-07-25 04:16:45.457674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-07-25 04:16:45.457700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-07-25 04:16:45.457828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-07-25 04:16:45.457856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-07-25 04:16:45.457976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-07-25 04:16:45.458003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 
00:33:30.374 [2024-07-25 04:16:45.458150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.458176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.458322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.458349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.458503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.458529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.458650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.458677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.458820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.458846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.459016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.459042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.459156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.459182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.459329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.459356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.459475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.459502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.459631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.459657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.459801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.459828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.459980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.460007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.460182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.460208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.460332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.460361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.460515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.460541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.460725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.460752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.460901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.460929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.461083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.461109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.461233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.461267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.461418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.461445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.461588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.461614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.461785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.461812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.461966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.461993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.462140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.462167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.462297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.462329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.462473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.462500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.462645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.462672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.462822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.462850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.462998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.463025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.463210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.463237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.463374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.463401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.374 qpair failed and we were unable to recover it.
00:33:30.374 [2024-07-25 04:16:45.463547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.374 [2024-07-25 04:16:45.463573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.463704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.463730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.463856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.463884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.464035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.464062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.464212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.464239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.464407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.464434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.464585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.464612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.464747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.464774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.464928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.464955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.465108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.465135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.465261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.465289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.465441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.465468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.465619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.465645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.465797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.465824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.465975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.466001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.466146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.466173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.466295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.466323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.466503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.466529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.466667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.466693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.466813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.466839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.466991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.467021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.467171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.467197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.467326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.467353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.467497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.467523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.467685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.467711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.467853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.467880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.467993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.468020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.468196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.468222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.468356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.468383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.375 qpair failed and we were unable to recover it.
00:33:30.375 [2024-07-25 04:16:45.468545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.375 [2024-07-25 04:16:45.468572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.468722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.468748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.468899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.468926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.469073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.469099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.469277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.469304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.469458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.469486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.469644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.469671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.469814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.469840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.469993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.470019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.470132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.470159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.470314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.470342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.470518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.470545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.470674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.470701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.470824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.470851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.470978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.471005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.471149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.471176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.471332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.471360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.471507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.471534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.471712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.471739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.471902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.471928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.472073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.472100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.472260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.472288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.472411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.472438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.472580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.472607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.472767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.472793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.472953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.472979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.473155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.473182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.473308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.473336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.473496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.473523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.473697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.473724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.473788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:30.376 [2024-07-25 04:16:45.473846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.473872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.473997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.474024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.474203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.474230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.474390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.474417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.474561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.376 [2024-07-25 04:16:45.474587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.376 qpair failed and we were unable to recover it.
00:33:30.376 [2024-07-25 04:16:45.474728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-07-25 04:16:45.474755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-07-25 04:16:45.474896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.474923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.475084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.475111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.475260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.475288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.475439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.475466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 
00:33:30.377 [2024-07-25 04:16:45.475626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.475653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.475829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.475855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.476006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.476032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.476209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.476236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.476431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.476458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 
00:33:30.377 [2024-07-25 04:16:45.476611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.476638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.476795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.476822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.476975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.477001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.477125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.477152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.477278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.477305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 
00:33:30.377 [2024-07-25 04:16:45.477453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.477479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.477597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.477625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.477771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.477797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.477944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.477971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.478181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.478209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 
00:33:30.377 [2024-07-25 04:16:45.478372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.478399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.478551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.478578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.478728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.478754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.478905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.478932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.479089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.479119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 
00:33:30.377 [2024-07-25 04:16:45.479273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.479300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.479453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.479479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.479656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.479682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.479838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.479865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.480015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.480042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 
00:33:30.377 [2024-07-25 04:16:45.480189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.480215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.480347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.480374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.480548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-07-25 04:16:45.480575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-07-25 04:16:45.480722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.480748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.480882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.480908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 
00:33:30.378 [2024-07-25 04:16:45.481031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.481058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.481175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.481202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.481332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.481360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.481540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.481567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.481732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.481759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 
00:33:30.378 [2024-07-25 04:16:45.481931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.481958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.482118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.482144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.482288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.482315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.482470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.482497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.482658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.482685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 
00:33:30.378 [2024-07-25 04:16:45.482865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.482891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.483068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.483095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.483220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.483262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.483411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.483438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.483592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.483619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 
00:33:30.378 [2024-07-25 04:16:45.483773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.483800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.483975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.484006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.484180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.484206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.484379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.484408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.484536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.484563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 
00:33:30.378 [2024-07-25 04:16:45.484709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.484736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.484852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.484878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.485029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.485055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.485183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.485210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.485366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.485395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 
00:33:30.378 [2024-07-25 04:16:45.485514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.485540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.485686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.485714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.485868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.485894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.486026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.486054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.486262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.486290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 
00:33:30.378 [2024-07-25 04:16:45.486416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.486442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.486590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.486617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.486773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.486800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.486925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.486951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-07-25 04:16:45.487103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-07-25 04:16:45.487130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 
00:33:30.378 [2024-07-25 04:16:45.487283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.487311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.487452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.487478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.487628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.487654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.487806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.487833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.487969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.487995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 
00:33:30.379 [2024-07-25 04:16:45.488136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.488163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.488314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.488342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.488467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.488494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.488648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.488676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.488831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.488859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 
00:33:30.379 [2024-07-25 04:16:45.489014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.489042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.489195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.489223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.489380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.489407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.489523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.489550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.489683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.489711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 
00:33:30.379 [2024-07-25 04:16:45.489893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.489921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.490083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.490110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.490259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.490288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.490440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.490468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.490612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.490641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 
00:33:30.379 [2024-07-25 04:16:45.490792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.490819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.490959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.490987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.491120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.491150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.491270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.491297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-07-25 04:16:45.491441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-07-25 04:16:45.491468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 
00:33:30.379 [2024-07-25 04:16:45.491622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.379 [2024-07-25 04:16:45.491649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.379 qpair failed and we were unable to recover it.
00:33:30.379 [2024-07-25 04:16:45.491813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.379 [2024-07-25 04:16:45.491840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.379 qpair failed and we were unable to recover it.
00:33:30.379 [2024-07-25 04:16:45.492019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.379 [2024-07-25 04:16:45.492046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.379 qpair failed and we were unable to recover it.
00:33:30.379 [2024-07-25 04:16:45.492200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.379 [2024-07-25 04:16:45.492228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.379 qpair failed and we were unable to recover it.
00:33:30.379 [2024-07-25 04:16:45.492409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.379 [2024-07-25 04:16:45.492439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.379 qpair failed and we were unable to recover it.
00:33:30.379 [2024-07-25 04:16:45.492592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.379 [2024-07-25 04:16:45.492620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.379 qpair failed and we were unable to recover it.
00:33:30.379 [2024-07-25 04:16:45.492795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.379 [2024-07-25 04:16:45.492822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.379 qpair failed and we were unable to recover it.
00:33:30.379 [2024-07-25 04:16:45.492951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.379 [2024-07-25 04:16:45.492978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.379 qpair failed and we were unable to recover it.
00:33:30.379 [2024-07-25 04:16:45.493130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.379 [2024-07-25 04:16:45.493157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.379 qpair failed and we were unable to recover it.
00:33:30.379 [2024-07-25 04:16:45.493309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.379 [2024-07-25 04:16:45.493338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.379 qpair failed and we were unable to recover it.
00:33:30.379 [2024-07-25 04:16:45.493460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.379 [2024-07-25 04:16:45.493487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.493659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.493686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.493839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.493866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.493991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.494019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.494139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.494167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.494319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.494347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.494502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.494529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.494655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.494682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.494797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.494825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.494983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.495010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.495156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.495183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.495341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.495369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.495524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.495552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.495699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.495726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.495874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.495906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.496083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.496111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.496257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.496285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.496462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.496489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.496667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.496694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.496844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.496872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.497017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.497044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.497201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.497228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.497382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.497410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.497555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.497582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.497740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.497767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.497910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.497938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.498084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.498111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.498248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.498276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.498448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.498491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.498626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.498655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.498781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.498810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.380 qpair failed and we were unable to recover it.
00:33:30.380 [2024-07-25 04:16:45.498964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.380 [2024-07-25 04:16:45.498994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.499182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.499210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.499368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.499396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.499540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.499569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.499724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.499752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.499890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.499918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.500073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.500102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.500219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.500254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.500393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.500423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.500604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.500632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.500778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.500806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.500983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.501012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.501166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.501194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.501322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.501351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.501512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.501541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.501675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.501705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.501885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.501913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.502061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.502089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.502234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.502269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.502425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.502453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.502606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.502634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.502786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.502814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.502970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.502998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.503175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.503203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.503365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.503393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.503559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.503588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.503714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.503742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.503922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.503951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.504075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.504104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.504259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.504288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.504474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.504503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.504655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.504684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.504827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.504855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.381 qpair failed and we were unable to recover it.
00:33:30.381 [2024-07-25 04:16:45.505003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.381 [2024-07-25 04:16:45.505032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.505182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.505210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.505392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.505422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.505578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.505607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.505790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.505823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.505946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.505974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.506121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.506150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.506331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.506359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.506511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.506540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.506693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.506722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.506877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.506906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.507054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.507082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.507224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.507258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.507393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.507422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.507580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.507609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.507789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.507817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.507964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.507992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.508149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.508179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.508363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.508392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.508544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.508573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.508723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-07-25 04:16:45.508752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
00:33:30.382 [2024-07-25 04:16:45.508879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.508908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-07-25 04:16:45.509026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.509055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-07-25 04:16:45.509206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.509234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-07-25 04:16:45.509369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.509399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-07-25 04:16:45.509528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.509557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 
00:33:30.382 [2024-07-25 04:16:45.509705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.509733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-07-25 04:16:45.509881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.509910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-07-25 04:16:45.510085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.510115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-07-25 04:16:45.510233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.510269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-07-25 04:16:45.510421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.510450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 
00:33:30.382 [2024-07-25 04:16:45.510577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.510606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-07-25 04:16:45.510751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.510780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-07-25 04:16:45.510935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.510963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-07-25 04:16:45.511113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.511142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-07-25 04:16:45.511296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.511325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 
00:33:30.382 [2024-07-25 04:16:45.511481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-07-25 04:16:45.511510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-07-25 04:16:45.511687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.511716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.511833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.511862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.512034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.512062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.512206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.512236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 
00:33:30.383 [2024-07-25 04:16:45.512397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.383 [2024-07-25 04:16:45.512425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.383 qpair failed and we were unable to recover it.
00:33:30.383 [2024-07-25 04:16:45.512576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.383 [2024-07-25 04:16:45.512603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.383 qpair failed and we were unable to recover it.
00:33:30.383 [2024-07-25 04:16:45.512788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.383 [2024-07-25 04:16:45.512816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.383 qpair failed and we were unable to recover it.
00:33:30.383 [2024-07-25 04:16:45.512942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.383 [2024-07-25 04:16:45.512974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.383 qpair failed and we were unable to recover it.
00:33:30.383 [2024-07-25 04:16:45.513139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.383 [2024-07-25 04:16:45.513179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.383 qpair failed and we were unable to recover it.
00:33:30.383 [2024-07-25 04:16:45.513336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.513366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.513514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.513543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.513690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.513718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.513895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.513923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.514046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.514074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 
00:33:30.383 [2024-07-25 04:16:45.514230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.514265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.514440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.514469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.514599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.514628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.514774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.514802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.514957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.514986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 
00:33:30.383 [2024-07-25 04:16:45.515162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.515191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.515369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.515411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.515552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.515582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.515729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.515758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.515879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.515907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 
00:33:30.383 [2024-07-25 04:16:45.516044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.516072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.516196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.516223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.516375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.516404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.516590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.516618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.516762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.516790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 
00:33:30.383 [2024-07-25 04:16:45.516935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.516962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.517139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.517166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.517299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.517328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.517506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.517533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.517650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.517677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 
00:33:30.383 [2024-07-25 04:16:45.517830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.517861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.518038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.518064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.518274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.383 [2024-07-25 04:16:45.518302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.383 qpair failed and we were unable to recover it. 00:33:30.383 [2024-07-25 04:16:45.518478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.518505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.518617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.518644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 
00:33:30.384 [2024-07-25 04:16:45.518797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.518823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.518979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.519005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.519184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.519210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.519377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.519404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.519557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.519585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 
00:33:30.384 [2024-07-25 04:16:45.519737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.519764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.519916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.519943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.520126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.520153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.520283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.520324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.520515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.520544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 
00:33:30.384 [2024-07-25 04:16:45.520675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.520703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.520866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.520894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.521044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.521071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.521217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.521251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.521406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.521434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 
00:33:30.384 [2024-07-25 04:16:45.521579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.521607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.521734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.521761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.521886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.521912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.522065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.522091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.522252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.522279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 
00:33:30.384 [2024-07-25 04:16:45.522406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.522433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.522585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.522613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.522763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.522794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.522940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.522968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.523113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.523140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 
00:33:30.384 [2024-07-25 04:16:45.523293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.523322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.523474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.523501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.523661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.523688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.523868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.523894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 00:33:30.384 [2024-07-25 04:16:45.524022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.524048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.384 qpair failed and we were unable to recover it. 
00:33:30.384 [2024-07-25 04:16:45.524237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.384 [2024-07-25 04:16:45.524271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.385 qpair failed and we were unable to recover it. 00:33:30.385 [2024-07-25 04:16:45.524450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.385 [2024-07-25 04:16:45.524476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.385 qpair failed and we were unable to recover it. 00:33:30.385 [2024-07-25 04:16:45.524592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.385 [2024-07-25 04:16:45.524619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.385 qpair failed and we were unable to recover it. 00:33:30.385 [2024-07-25 04:16:45.524749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.385 [2024-07-25 04:16:45.524777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.385 qpair failed and we were unable to recover it. 00:33:30.385 [2024-07-25 04:16:45.524898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.385 [2024-07-25 04:16:45.524925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.385 qpair failed and we were unable to recover it. 
00:33:30.386 [2024-07-25 04:16:45.533813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.386 [2024-07-25 04:16:45.533841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.386 qpair failed and we were unable to recover it.
00:33:30.386 [2024-07-25 04:16:45.533987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.386 [2024-07-25 04:16:45.534015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.386 qpair failed and we were unable to recover it.
00:33:30.386 [2024-07-25 04:16:45.534196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.386 [2024-07-25 04:16:45.534224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.386 qpair failed and we were unable to recover it.
00:33:30.386 [2024-07-25 04:16:45.534416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.386 [2024-07-25 04:16:45.534444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.386 qpair failed and we were unable to recover it.
00:33:30.386 [2024-07-25 04:16:45.534651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.386 [2024-07-25 04:16:45.534694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.386 qpair failed and we were unable to recover it.
00:33:30.386 [2024-07-25 04:16:45.535809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.386 [2024-07-25 04:16:45.535838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.386 qpair failed and we were unable to recover it.
00:33:30.386 [2024-07-25 04:16:45.536002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.386 [2024-07-25 04:16:45.536044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.386 qpair failed and we were unable to recover it.
00:33:30.386 [2024-07-25 04:16:45.536207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.387 [2024-07-25 04:16:45.536236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.387 qpair failed and we were unable to recover it.
00:33:30.387 [2024-07-25 04:16:45.536393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.387 [2024-07-25 04:16:45.536422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.387 qpair failed and we were unable to recover it.
00:33:30.387 [2024-07-25 04:16:45.536567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.387 [2024-07-25 04:16:45.536595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.387 qpair failed and we were unable to recover it.
00:33:30.388 [2024-07-25 04:16:45.544683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.544713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.544893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.544921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.545068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.545096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.545238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.545272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.545419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.545447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 
00:33:30.388 [2024-07-25 04:16:45.545567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.545596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.545750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.545778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.545956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.545984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.546105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.546134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.546291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.546320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 
00:33:30.388 [2024-07-25 04:16:45.546445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.546473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.546624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.546652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.546811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.546840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.546990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.547018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.547158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.547199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 
00:33:30.388 [2024-07-25 04:16:45.547350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.547381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.547538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.547566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.547746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.547774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.547931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.547961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.548113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.548141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 
00:33:30.388 [2024-07-25 04:16:45.548296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.548325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.548459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.548487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.548668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.548697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-07-25 04:16:45.548845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-07-25 04:16:45.548874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.549026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.549054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 
00:33:30.389 [2024-07-25 04:16:45.549180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.549208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.549378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.549409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.549562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.549590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.549726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.549754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.549930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.549957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 
00:33:30.389 [2024-07-25 04:16:45.550111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.550139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.550267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.550296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.550420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.550449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.550603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.550631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.550750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.550777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 
00:33:30.389 [2024-07-25 04:16:45.550923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.550951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.551082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.551110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.551255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.551284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.551437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.551466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.551623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.551652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 
00:33:30.389 [2024-07-25 04:16:45.551773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.551801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.551952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.551980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.552130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.552158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.552297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.552338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.552505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.552536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 
00:33:30.389 [2024-07-25 04:16:45.552687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.552715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.552868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.552897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.553052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.553080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.553202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.553230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.553364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.553393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 
00:33:30.389 [2024-07-25 04:16:45.553549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.553578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.553728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.553755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.553907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.553940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.554106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.554136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.554284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.554312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 
00:33:30.389 [2024-07-25 04:16:45.554463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.554491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.554643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.554672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.554798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.554826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.554976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.389 [2024-07-25 04:16:45.555005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.389 qpair failed and we were unable to recover it. 00:33:30.389 [2024-07-25 04:16:45.555128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.555157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 
00:33:30.390 [2024-07-25 04:16:45.555306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.555335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.555535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.555575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.555733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.555761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.555914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.555942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.556077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.556106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 
00:33:30.390 [2024-07-25 04:16:45.556266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.556296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.556456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.556485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.556632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.556660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.556782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.556813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.556999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.557027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 
00:33:30.390 [2024-07-25 04:16:45.557182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.557211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.557395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.557424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.557546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.557575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.557699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.557728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.557875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.557902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 
00:33:30.390 [2024-07-25 04:16:45.558059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.558086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.558234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.558267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.558398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.558427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.558556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.558582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.558731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.558763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 
00:33:30.390 [2024-07-25 04:16:45.558919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.558947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.559080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.559107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.559288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.559316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.559464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.559492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-07-25 04:16:45.559622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-07-25 04:16:45.559650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 
00:33:30.390 [2024-07-25 04:16:45.559801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.390 [2024-07-25 04:16:45.559828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.390 qpair failed and we were unable to recover it.
00:33:30.390 [2024-07-25 04:16:45.559977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.390 [2024-07-25 04:16:45.560005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.390 qpair failed and we were unable to recover it.
00:33:30.390 [2024-07-25 04:16:45.560181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.390 [2024-07-25 04:16:45.560209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.390 qpair failed and we were unable to recover it.
00:33:30.390 [2024-07-25 04:16:45.560394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.390 [2024-07-25 04:16:45.560421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.390 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.560571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.560597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.560754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.560781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.560926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.560952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.561103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.561130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.561276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.561303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.561448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.561474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.561602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.561629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.561750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.561777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.561905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.561931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.562072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.562097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.562217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.562249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.562400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.562427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.562548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.562575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.562705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.562732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.562879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.562905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.563083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.563110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.563299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.563327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.563485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.563512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.563711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.563739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.563868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.563895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.564021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.564047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.564218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.564266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.564414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.564455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.564591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.564631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.564677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:30.391 [2024-07-25 04:16:45.564711] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:30.391 [2024-07-25 04:16:45.564726] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:30.391 [2024-07-25 04:16:45.564739] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:30.391 [2024-07-25 04:16:45.564750] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:30.391 [2024-07-25 04:16:45.564753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.564780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.564823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:33:30.391 [2024-07-25 04:16:45.564935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.565024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.391 [2024-07-25 04:16:45.564921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.564973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:33:30.391 [2024-07-25 04:16:45.564980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:33:30.391 [2024-07-25 04:16:45.565217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.565251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.565400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.565429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.565554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.565581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.565706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.565733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.565908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.565934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.566080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.566106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.566255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.566284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.566451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.566478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.566601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.566627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.566773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.566801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.566923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.391 [2024-07-25 04:16:45.566949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.391 qpair failed and we were unable to recover it.
00:33:30.391 [2024-07-25 04:16:45.567071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.567098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.567249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.567277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.567400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.567426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.567556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.567588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.567704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.567730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.567852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.567878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.568009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.568037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.568185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.568212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.568368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.568396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.568528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.568555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.568676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.568703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.568861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.568888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.569014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.569041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.569187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.569214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.569367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.569408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.569538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.569567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.569715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.569742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.569893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.569920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.570033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.570059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.570202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.570229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.570368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.570394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.570564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.570604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.570759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.570788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.570914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.570943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.571102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.571129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.571289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.571330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.571467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.571496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.571625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.571653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.571811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.571839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.571969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.571996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.572129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.572163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.572343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.572372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.572489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.572516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.572647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.572674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.572792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.572819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.572947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.572974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.573129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.573157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.573297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.573326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.573503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.573530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.573681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.392 [2024-07-25 04:16:45.573709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.392 qpair failed and we were unable to recover it.
00:33:30.392 [2024-07-25 04:16:45.573862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.393 [2024-07-25 04:16:45.573890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.393 qpair failed and we were unable to recover it.
00:33:30.393 [2024-07-25 04:16:45.574015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.393 [2024-07-25 04:16:45.574042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.393 qpair failed and we were unable to recover it.
00:33:30.393 [2024-07-25 04:16:45.574173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.393 [2024-07-25 04:16:45.574201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.393 qpair failed and we were unable to recover it.
00:33:30.393 [2024-07-25 04:16:45.574359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.393 [2024-07-25 04:16:45.574386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.393 qpair failed and we were unable to recover it.
00:33:30.393 [2024-07-25 04:16:45.574538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.393 [2024-07-25 04:16:45.574565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.393 qpair failed and we were unable to recover it.
00:33:30.393 [2024-07-25 04:16:45.574686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.393 [2024-07-25 04:16:45.574713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.393 qpair failed and we were unable to recover it.
00:33:30.393 [2024-07-25 04:16:45.574860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.393 [2024-07-25 04:16:45.574887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.393 qpair failed and we were unable to recover it.
00:33:30.393 [2024-07-25 04:16:45.575011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.393 [2024-07-25 04:16:45.575037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.393 qpair failed and we were unable to recover it.
00:33:30.393 [2024-07-25 04:16:45.575164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.393 [2024-07-25 04:16:45.575191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.393 qpair failed and we were unable to recover it.
00:33:30.393 [2024-07-25 04:16:45.575319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.393 [2024-07-25 04:16:45.575347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.393 qpair failed and we were unable to recover it.
00:33:30.393 [2024-07-25 04:16:45.575459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.393 [2024-07-25 04:16:45.575486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.393 qpair failed and we were unable to recover it.
00:33:30.393 [2024-07-25 04:16:45.575631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.575658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.575787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.575814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.575933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.575959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.576094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.576120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.576238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.576271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 
00:33:30.393 [2024-07-25 04:16:45.576427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.576455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.576576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.576607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.576750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.576777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.576893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.576920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.577049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.577076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 
00:33:30.393 [2024-07-25 04:16:45.577229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.577264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.577410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.577439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.577587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.577613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.577727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.577754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.577873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.577900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 
00:33:30.393 [2024-07-25 04:16:45.578028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.578058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.578209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.578236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.578371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.578399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.578552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.578579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.578736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.578764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 
00:33:30.393 [2024-07-25 04:16:45.578923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.578951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.579082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.579110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.579233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.579270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.579397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.579424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.579574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.579601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 
00:33:30.393 [2024-07-25 04:16:45.579760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.579787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.579975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.580002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.580115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.580142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-07-25 04:16:45.580262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-07-25 04:16:45.580290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.580405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.580432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-07-25 04:16:45.580550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.580577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.580720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.580747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.580898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.580924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.581054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.581085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.581202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.581229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-07-25 04:16:45.581391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.581433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.581573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.581601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.581760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.581789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.581915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.581943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.582070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.582098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-07-25 04:16:45.582275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.582304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.582429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.582456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.582580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.582608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.582756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.582783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.582906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.582935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-07-25 04:16:45.583053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.583082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.583232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.583274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.583412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.583440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.583587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.583615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.583739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.583766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-07-25 04:16:45.583878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.583906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.584104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.584145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.584282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.584311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.584432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.584460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.584642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.584669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-07-25 04:16:45.584824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.584851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.584983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.585024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.585182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.585211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.585371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.585413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.585544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.585572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-07-25 04:16:45.585684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.585721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.585838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.585866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.585986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.586012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.586161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.586190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.586325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.586354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-07-25 04:16:45.586501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.586529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.586654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.586681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.586808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.586835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.586977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.587007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.587144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.587184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-07-25 04:16:45.587314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.587343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.587476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-07-25 04:16:45.587503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-07-25 04:16:45.587627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.587656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.587804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.587831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.587960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.587988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 
00:33:30.395 [2024-07-25 04:16:45.588110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.588139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.588268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.588295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.588413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.588440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.588564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.588591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.588770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.588797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 
00:33:30.395 [2024-07-25 04:16:45.588925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.588952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.589079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.589108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.589250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.589279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.589411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.589438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.589561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.589590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 
00:33:30.395 [2024-07-25 04:16:45.589703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.589730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.589853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.589880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.590036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.590066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.590191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.590219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.590376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.590404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 
00:33:30.395 [2024-07-25 04:16:45.590540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.590567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.590681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.590708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.590835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.590863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.591051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.591079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-07-25 04:16:45.591210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-07-25 04:16:45.591237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 
00:33:30.395 [2024-07-25 04:16:45.591373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.395 [2024-07-25 04:16:45.591400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.395 qpair failed and we were unable to recover it.
00:33:30.395 [2024-07-25 04:16:45.591525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.395 [2024-07-25 04:16:45.591553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.395 qpair failed and we were unable to recover it.
00:33:30.395 [2024-07-25 04:16:45.591678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.395 [2024-07-25 04:16:45.591706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.395 qpair failed and we were unable to recover it.
00:33:30.395 [2024-07-25 04:16:45.591831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.395 [2024-07-25 04:16:45.591858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.395 qpair failed and we were unable to recover it.
00:33:30.395 [2024-07-25 04:16:45.592008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.395 [2024-07-25 04:16:45.592034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.395 qpair failed and we were unable to recover it.
00:33:30.395 [2024-07-25 04:16:45.592163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.395 [2024-07-25 04:16:45.592189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.395 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.592365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.592393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.592523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.592550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.592666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.592693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.592816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.592842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.592976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.593006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.593131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.593159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.593345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.593373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.593533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.593560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.593686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.593714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.593869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.593897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.594025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.594053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.594173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.594199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.594333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.594361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.594493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.594521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.594641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.594668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.594813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.594840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.594984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.595011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.595138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.595165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.595291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.595319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.595474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.595501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.595692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.595720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.595869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.595897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.596083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.596111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.596268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.596318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.596449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.596476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.596604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.596632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.596762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.596793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.596913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.596940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.597060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.597087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.597206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.597233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.597370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.597398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.597526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.597552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.597668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.597695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.597850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.597877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.598010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.598039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.598158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.598187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.598306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.598334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.598454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.598481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.598613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.598641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.598785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.598812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.598968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.598996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.599141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.599168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.599300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.599327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.599451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.599479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.599624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.599650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.599768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.599795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.599968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.599995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.600123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.600150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.600297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.600340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.600502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.600530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.600656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.600684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.600798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.600826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.600942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.600969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.601087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.601118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.601259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.601287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.601437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.601464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.396 qpair failed and we were unable to recover it.
00:33:30.396 [2024-07-25 04:16:45.601598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.396 [2024-07-25 04:16:45.601639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.601798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.601827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.601956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.601983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.602140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.602168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.602297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.602325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.602444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.602472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.602592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.602619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.602764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.602791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.602963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.602990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.603107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.603135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.603255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.603283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.603433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.603460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.603612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.603640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.603788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.603815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.603936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.603962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.604107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.604133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.604256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.604284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.604430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.604457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.604587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.604614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.604732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.604760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.604882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.604909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.605037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.605064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.605215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.605254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.605383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.605409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.605528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.605556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.605673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.605699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.605847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.605874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.605990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.606017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.606173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.606200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.606353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.606394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.606530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.606560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.606691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.606719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.606851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.397 [2024-07-25 04:16:45.606878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.397 qpair failed and we were unable to recover it.
00:33:30.397 [2024-07-25 04:16:45.607003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.397 [2024-07-25 04:16:45.607032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.397 qpair failed and we were unable to recover it. 00:33:30.397 [2024-07-25 04:16:45.607152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.397 [2024-07-25 04:16:45.607179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.397 qpair failed and we were unable to recover it. 00:33:30.397 [2024-07-25 04:16:45.607309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.397 [2024-07-25 04:16:45.607338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.397 qpair failed and we were unable to recover it. 00:33:30.397 [2024-07-25 04:16:45.607459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.397 [2024-07-25 04:16:45.607486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.397 qpair failed and we were unable to recover it. 00:33:30.397 [2024-07-25 04:16:45.607640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.607667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 
00:33:30.398 [2024-07-25 04:16:45.607796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.607825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.607958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.607986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.608122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.608150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.608361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.608388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.608513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.608541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 
00:33:30.398 [2024-07-25 04:16:45.608668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.608696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.608851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.608878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.609004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.609032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.609166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.609208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.609363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.609404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 
00:33:30.398 [2024-07-25 04:16:45.609532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.609560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.609690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.609718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.609881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.609908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.610039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.610076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.610230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.610268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 
00:33:30.398 [2024-07-25 04:16:45.610392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.610419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.610535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.610562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.610683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.610710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.610859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.610886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.611043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.611069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 
00:33:30.398 [2024-07-25 04:16:45.611189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.611216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.611343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.611370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.611486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.611513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.611675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.611702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.611852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.611878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 
00:33:30.398 [2024-07-25 04:16:45.612001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.612028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.612162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.612189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.612336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.612377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.612503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.612533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.612658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.612687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 
00:33:30.398 [2024-07-25 04:16:45.612841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.612870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.613068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.613146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.613313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.613342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.613528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.613556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.613676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.613704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 
00:33:30.398 [2024-07-25 04:16:45.613866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.613894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.614044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.398 [2024-07-25 04:16:45.614072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.398 qpair failed and we were unable to recover it. 00:33:30.398 [2024-07-25 04:16:45.614226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.614261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 00:33:30.399 [2024-07-25 04:16:45.614385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.614414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 00:33:30.399 [2024-07-25 04:16:45.614536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.614564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 
00:33:30.399 [2024-07-25 04:16:45.614718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.614751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 00:33:30.399 [2024-07-25 04:16:45.614934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.614962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 00:33:30.399 [2024-07-25 04:16:45.615109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.615162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 00:33:30.399 [2024-07-25 04:16:45.615297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.615326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 00:33:30.399 [2024-07-25 04:16:45.615453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.615484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 
00:33:30.399 [2024-07-25 04:16:45.615611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.615639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 00:33:30.399 [2024-07-25 04:16:45.615775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.615803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 00:33:30.399 [2024-07-25 04:16:45.615939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.615966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 00:33:30.399 [2024-07-25 04:16:45.616115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.616141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 00:33:30.399 [2024-07-25 04:16:45.616275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.616304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 
00:33:30.399 [2024-07-25 04:16:45.616437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.616465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 00:33:30.399 [2024-07-25 04:16:45.616608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.616637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 00:33:30.399 [2024-07-25 04:16:45.616779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.616820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 00:33:30.399 [2024-07-25 04:16:45.616954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.399 [2024-07-25 04:16:45.616983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.399 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.617117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.617145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 
00:33:30.670 [2024-07-25 04:16:45.617275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.617303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.617428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.617456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.617610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.617638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.617800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.617828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.617960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.617987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 
00:33:30.670 [2024-07-25 04:16:45.618106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.618133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.618268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.618298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.618424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.618451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.618578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.618607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.618768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.618796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 
00:33:30.670 [2024-07-25 04:16:45.618946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.618974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.619099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.619127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.619250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.619279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.619435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.619464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.619587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.619615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 
00:33:30.670 [2024-07-25 04:16:45.619742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.619770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.619920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.619947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.620097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.620125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.620267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.620296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.620443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.620470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 
00:33:30.670 [2024-07-25 04:16:45.620619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.620646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.620804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.620831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.620981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.621008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.621135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.621163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.621291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.621320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 
00:33:30.670 [2024-07-25 04:16:45.621472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.621505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.621655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.621683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.621840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.621867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.621990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.670 [2024-07-25 04:16:45.622017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.670 qpair failed and we were unable to recover it. 00:33:30.670 [2024-07-25 04:16:45.622164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.622192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 
00:33:30.671 [2024-07-25 04:16:45.622341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.622371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.622499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.622540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.622696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.622725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.622874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.622901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.623049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.623077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 
00:33:30.671 [2024-07-25 04:16:45.623196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.623222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.623390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.623418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.623566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.623594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.623723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.623750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.623884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.623910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 
00:33:30.671 [2024-07-25 04:16:45.624063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.624091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.624267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.624309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.624458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.624485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.624637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.624664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.624812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.624839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 
00:33:30.671 [2024-07-25 04:16:45.624967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.624993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.625123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.625150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.625283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.625311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.625427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.625453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.625640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.625675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 
00:33:30.671 [2024-07-25 04:16:45.625856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.625892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.626059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.626093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.626233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.626283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.626457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.626494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.626638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.626672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 
00:33:30.671 [2024-07-25 04:16:45.626813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.626840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.626966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.626992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.627140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.627167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.627324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.627352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.627502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.627528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 
00:33:30.671 [2024-07-25 04:16:45.627645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.627671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.627819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.627845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.627965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.627992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.628129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.628156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.628280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.628309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 
00:33:30.671 [2024-07-25 04:16:45.628436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.628463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.671 [2024-07-25 04:16:45.628586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.671 [2024-07-25 04:16:45.628613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.671 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.628786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.628813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.628935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.628961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.629106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.629134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 
00:33:30.672 [2024-07-25 04:16:45.629283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.629324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.629464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.629504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.629668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.629698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.629874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.629902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.630018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.630045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 
00:33:30.672 [2024-07-25 04:16:45.630166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.630193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.630352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.630381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.630519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.630548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.630675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.630702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.630829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.630862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 
00:33:30.672 [2024-07-25 04:16:45.631013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.631040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.631213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.631248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.631376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.631406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.631535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.631563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.631683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.631711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 
00:33:30.672 [2024-07-25 04:16:45.631837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.631867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.631998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.632027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.632180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.632209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.632338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.632366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.632495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.632524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 
00:33:30.672 [2024-07-25 04:16:45.632663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.632691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.632825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.632852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.633003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.633031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.633191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.633219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.633347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.633377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 
00:33:30.672 [2024-07-25 04:16:45.633503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.633531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.633676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.633703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.633848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.633875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.634003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.634031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.634187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.634215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 
00:33:30.672 [2024-07-25 04:16:45.634339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.634368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.634485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.634513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.634626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.634654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.634782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.672 [2024-07-25 04:16:45.634811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.672 qpair failed and we were unable to recover it. 00:33:30.672 [2024-07-25 04:16:45.634936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.634966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 
00:33:30.673 [2024-07-25 04:16:45.635088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.635117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.635249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.635282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.635482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.635509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.635655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.635682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.635828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.635856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 
00:33:30.673 [2024-07-25 04:16:45.635968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.635995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.636140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.636168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.636295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.636324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.636453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.636481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.636627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.636655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 
00:33:30.673 [2024-07-25 04:16:45.636767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.636795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.636916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.636943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.637088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.637115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.637226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.637261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.637449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.637477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 
00:33:30.673 [2024-07-25 04:16:45.637632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.637659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.637809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.637836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.638032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.638060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.638183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.638211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.638364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.638392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 
00:33:30.673 [2024-07-25 04:16:45.638527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.638555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.638773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.638801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.638922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.638950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.639108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.639135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.639270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.639298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 
00:33:30.673 [2024-07-25 04:16:45.639411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.639438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.639557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.639584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.639744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.639772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.639882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.639909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.640042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.640070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 
00:33:30.673 [2024-07-25 04:16:45.640195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.640222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.640383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.640411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.640538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.640566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.640692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.640719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.640878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.640905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 
00:33:30.673 [2024-07-25 04:16:45.641019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.641047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.673 [2024-07-25 04:16:45.641190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.673 [2024-07-25 04:16:45.641218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.673 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.641371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.641399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.641586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.641613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.641769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.641797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 
00:33:30.674 [2024-07-25 04:16:45.641939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.641967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.642184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.642211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.642380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.642409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.642538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.642565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.642691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.642719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 
00:33:30.674 [2024-07-25 04:16:45.642835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.642862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.643009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.643036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.643196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.643223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.643354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.643381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.643531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.643559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 
00:33:30.674 [2024-07-25 04:16:45.643702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.643729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.643859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.643886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.644020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.644050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.644174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.644202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.644341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.644369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 
00:33:30.674 [2024-07-25 04:16:45.644500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.644528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.644650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.644678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.644825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.644852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.644997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.645024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.645205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.645233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 
00:33:30.674 [2024-07-25 04:16:45.645360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.645387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.645504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.645531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.645651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.645678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.645822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.645850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.645968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.645996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 
00:33:30.674 [2024-07-25 04:16:45.646116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.646143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.646290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.646318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.646438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.674 [2024-07-25 04:16:45.646466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.674 qpair failed and we were unable to recover it. 00:33:30.674 [2024-07-25 04:16:45.646617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.646644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.646786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.646818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 
00:33:30.675 [2024-07-25 04:16:45.646974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.647001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.647114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.647141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.647272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.647300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.647434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.647461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.647576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.647602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 
00:33:30.675 [2024-07-25 04:16:45.647727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.647754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.647875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.647902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.648044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.648071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.648186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.648213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.648342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.648370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 
00:33:30.675 [2024-07-25 04:16:45.648488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.648515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.648632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.648660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.648783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.648810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.648962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.648990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.649114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.649141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 
00:33:30.675 [2024-07-25 04:16:45.649317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.649345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.649492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.649519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.649631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.649658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.649781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.649809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.649924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.649951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 
00:33:30.675 [2024-07-25 04:16:45.650102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.650130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.650279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.650307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.650437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.650464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.650605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.650632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.650780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.650807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 
00:33:30.675 [2024-07-25 04:16:45.650960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.650989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.651136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.651163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.651319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.651347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.651496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.651524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.651670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.651697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 
00:33:30.675 [2024-07-25 04:16:45.651844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.651872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.651986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.652014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.652137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.652164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.652308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.652336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.652462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.652490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 
00:33:30.675 [2024-07-25 04:16:45.652611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.675 [2024-07-25 04:16:45.652638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.675 qpair failed and we were unable to recover it. 00:33:30.675 [2024-07-25 04:16:45.652766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.676 [2024-07-25 04:16:45.652794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.676 qpair failed and we were unable to recover it. 00:33:30.676 [2024-07-25 04:16:45.652940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.676 [2024-07-25 04:16:45.652967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.676 qpair failed and we were unable to recover it. 00:33:30.676 [2024-07-25 04:16:45.653119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.676 [2024-07-25 04:16:45.653146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.676 qpair failed and we were unable to recover it. 00:33:30.676 [2024-07-25 04:16:45.653266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.676 [2024-07-25 04:16:45.653294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.676 qpair failed and we were unable to recover it. 
00:33:30.676 [2024-07-25 04:16:45.653439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.676 [2024-07-25 04:16:45.653470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.676 qpair failed and we were unable to recover it. 00:33:30.676 [2024-07-25 04:16:45.653618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.676 [2024-07-25 04:16:45.653645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.676 qpair failed and we were unable to recover it. 00:33:30.676 [2024-07-25 04:16:45.653759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.676 [2024-07-25 04:16:45.653785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.676 qpair failed and we were unable to recover it. 00:33:30.676 [2024-07-25 04:16:45.653910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.676 [2024-07-25 04:16:45.653936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.676 qpair failed and we were unable to recover it. 00:33:30.676 [2024-07-25 04:16:45.654048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.676 [2024-07-25 04:16:45.654075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.676 qpair failed and we were unable to recover it. 
00:33:30.676 [2024-07-25 04:16:45.654202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.676 [2024-07-25 04:16:45.654230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.676 qpair failed and we were unable to recover it. 00:33:30.676 [2024-07-25 04:16:45.654365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.676 [2024-07-25 04:16:45.654392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.676 qpair failed and we were unable to recover it. 00:33:30.676 [2024-07-25 04:16:45.654516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.676 [2024-07-25 04:16:45.654544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.676 qpair failed and we were unable to recover it. 00:33:30.676 [2024-07-25 04:16:45.654694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.676 [2024-07-25 04:16:45.654722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.676 qpair failed and we were unable to recover it. 00:33:30.676 [2024-07-25 04:16:45.654843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.676 [2024-07-25 04:16:45.654870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.676 qpair failed and we were unable to recover it. 
00:33:30.676 [2024-07-25 04:16:45.654999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.655026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.655148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.655175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.655306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.655334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.655456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.655483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.655604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.655631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.655748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.655775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.655883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.655911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.656042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.656070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.656199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.656227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.656362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.656389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.656514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.656541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.656657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.656684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.656834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.656861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.656978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.657005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.657122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.657149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.657267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.657295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.657443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.657470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.676 [2024-07-25 04:16:45.657645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.676 [2024-07-25 04:16:45.657679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.676 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.657790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.657817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.657969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.657997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.658153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.658180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.658352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.658379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.658498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.658526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.658639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.658666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.658814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.658842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.658960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.658987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.659098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.659125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.659275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.659303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.659425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.659453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.659577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.659606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.659749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.659776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.659894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.659922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.660031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.660059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.660182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.660209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.660338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.660367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.660487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.660515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.660661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.660688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.660832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.660859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.660976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.661004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.661154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.661181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.661333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.661361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.661484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.661511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.661662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.661690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.661866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.661894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.662037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.662065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.662190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.662217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.662347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.662375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.662525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.662552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.662728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.662755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.662879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.662907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.663032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.663059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.663217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.663253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.663384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.663412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.663536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.663564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.663691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.677 [2024-07-25 04:16:45.663718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.677 qpair failed and we were unable to recover it.
00:33:30.677 [2024-07-25 04:16:45.663871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.663898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.664042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.664069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.664216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.664250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.664371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.664403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.664535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.664562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.664740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.664768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.664897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.664925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.665069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.665097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.665225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.665279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.665406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.665434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.665546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.665573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.665686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.665714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.665869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.665896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.666021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.666048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.666203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.666231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.666386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.666414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.666563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.666590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.666712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.666741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.666899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.666926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.667049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.667076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.667219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.667255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.667401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.667428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.667586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.667614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.667726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.667754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.667889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.667916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.668065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.668093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.668237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.668271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.668400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.668428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.668549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.668576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.668702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.668729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.668838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.668867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.669019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.669047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.669197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.669224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.669354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.669381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.669497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.669525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.669640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.669667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.669784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.669812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.669936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.678 [2024-07-25 04:16:45.669963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.678 qpair failed and we were unable to recover it.
00:33:30.678 [2024-07-25 04:16:45.670135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.670162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.670273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.670302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.670458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.670485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.670631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.670658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.670778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.670805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.670948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.670975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.671146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.671189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.671332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.671362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.671514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.671542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.671657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.671686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.671836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.671864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.671987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.672016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.672189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.672217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.672346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.672374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.672492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.672520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.672644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.672672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.672786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.672814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.672930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.672957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.673077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.673105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.673271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.673300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.673454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.673482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.673604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.673632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.673750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.673777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.673888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.673916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.674085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.674113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.674235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.674270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.674459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.674487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.674600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.674627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.674781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.674809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.674942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.674969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.675144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.675171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.675290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.679 [2024-07-25 04:16:45.675318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.679 qpair failed and we were unable to recover it.
00:33:30.679 [2024-07-25 04:16:45.675470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.675497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.675655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.675682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.675803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.675830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.675949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.675976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.676120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.676147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.676257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.676285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.676411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.676438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.676566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.676593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.676716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.676743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.676879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.676906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.677025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.677052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.677161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.677188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.677315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.677342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.677462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.677489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.677631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.677658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.677790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.677817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.677967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.677994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.678109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.678136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.678264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.678292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.678415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.678442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.678572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.678600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.678751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.678778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.678920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.678947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.679076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.679104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.679224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.679260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.679447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.679475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.679596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.679623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.679745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.679773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.679906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.679937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.680084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.680111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.680229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.680264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.680386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.680414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.680526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.680553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.680667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.680694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.680853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.680880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.681029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.681057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.681177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.681205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.681368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.681396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.680 qpair failed and we were unable to recover it.
00:33:30.680 [2024-07-25 04:16:45.681532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.680 [2024-07-25 04:16:45.681559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.681672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.681699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.681873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.681901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.682024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.682051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.682175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.682203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.682337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.682365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.682519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.682547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.682700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.682727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.682870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.682897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.683013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.683040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.683164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.683192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.683314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.683342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.683461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.683488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.683604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.683631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.683748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.683776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.683919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.683946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.684073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.684100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.684226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.684270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.684392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.684421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.684551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.684578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.684701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.684728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.684863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.684890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.685044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.685071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.685224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.685258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.685416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.685443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.685587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.685614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.685759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.685787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.685929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.685956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.686074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.686102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.686220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.686264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.686422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.686449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.686562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.686594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.686737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.686764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.686908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.686935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.687055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.687082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.687231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.687267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.687384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.687412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.687539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.687567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.681 [2024-07-25 04:16:45.687684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.681 [2024-07-25 04:16:45.687713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.681 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.687832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.687859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.688010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.688037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.688154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.688181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.688325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.688354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.688469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.688496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.688615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.688642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.688761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.688788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.688905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.688932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.689047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.689074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.689198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.689226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.689403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.689432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.689563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.689590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.689745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.689772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.689922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.689948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.690071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.690099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.690248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.690276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.690426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.690453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.690598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.690625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.690799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.690826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.690965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.691000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.691168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.691195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.691332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.691359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.691508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.691535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.691659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.691687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.691834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.691862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.691993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.692021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.692133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.692160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.692296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.692324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.692441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.692470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.692620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.692649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.692777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.692803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.692930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.692957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.693100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.693127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.693261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.693287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.693430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.693457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.693589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.693616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.693745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.682 [2024-07-25 04:16:45.693772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.682 qpair failed and we were unable to recover it.
00:33:30.682 [2024-07-25 04:16:45.693888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.693915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.694024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.694051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.694164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.694192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.694325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.694354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.694468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.694496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.694650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.694677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.694825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.694852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.694971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.694998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.695123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.695151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.695294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.695322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.695452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.695480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.695600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.695628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.695750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.695777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.695921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.695948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.696064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.696091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.696213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.696248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.696396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.696424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.696568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.696596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.696715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.696742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.696893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.696922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.697045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.697072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.697235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.697270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.697385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.697412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.697538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.697569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.697689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.697715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.697838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.697866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.697979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.698006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.698116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.698143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.698298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.698326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.698458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.698485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.698609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.698636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.698760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.698787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.698935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.698962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.699114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.699142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.699267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.699295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.699416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.699443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.699564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.699591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.699714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.683 [2024-07-25 04:16:45.699742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.683 qpair failed and we were unable to recover it.
00:33:30.683 [2024-07-25 04:16:45.699858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.684 [2024-07-25 04:16:45.699886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.684 qpair failed and we were unable to recover it.
00:33:30.684 [2024-07-25 04:16:45.699997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.684 [2024-07-25 04:16:45.700024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.684 qpair failed and we were unable to recover it.
00:33:30.684 [2024-07-25 04:16:45.700144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.684 [2024-07-25 04:16:45.700171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.684 qpair failed and we were unable to recover it.
00:33:30.684 [2024-07-25 04:16:45.700291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.684 [2024-07-25 04:16:45.700344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.684 qpair failed and we were unable to recover it.
00:33:30.684 [2024-07-25 04:16:45.700472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.684 [2024-07-25 04:16:45.700499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.684 qpair failed and we were unable to recover it.
00:33:30.684 [2024-07-25 04:16:45.700643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.684 [2024-07-25 04:16:45.700670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.684 qpair failed and we were unable to recover it.
00:33:30.684 [2024-07-25 04:16:45.700821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.684 [2024-07-25 04:16:45.700848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.684 qpair failed and we were unable to recover it.
00:33:30.684 [2024-07-25 04:16:45.700961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.684 [2024-07-25 04:16:45.700988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.684 qpair failed and we were unable to recover it.
00:33:30.684 [2024-07-25 04:16:45.701102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.701129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.701257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.701286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:30.684 [2024-07-25 04:16:45.701438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.701466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:33:30.684 [2024-07-25 04:16:45.701594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.701621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 
00:33:30.684 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:30.684 [2024-07-25 04:16:45.701742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.701770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:30.684 [2024-07-25 04:16:45.701917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.684 [2024-07-25 04:16:45.701945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.702067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.702094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.702218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.702252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 
00:33:30.684 [2024-07-25 04:16:45.702384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.702414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.702533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.702561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.702678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.702704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.702823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.702849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.702998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.703025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 
00:33:30.684 [2024-07-25 04:16:45.703149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.703174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.703293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.703324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.703446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.703472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.703590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.703616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.703759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.703786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 
00:33:30.684 [2024-07-25 04:16:45.703894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.703920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.704038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.704063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.704210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.704237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.704370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.704396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-07-25 04:16:45.704505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-07-25 04:16:45.704531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 
00:33:30.684 [2024-07-25 04:16:45.704674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.704701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.704841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.704867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.704997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.705023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.705154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.705180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.705339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.705365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 
00:33:30.685 [2024-07-25 04:16:45.705479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.705505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.705696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.705722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.705836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.705862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.705977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.706003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.706124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.706149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 
00:33:30.685 [2024-07-25 04:16:45.706271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.706298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.706420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.706446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.706585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.706611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.706743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.706769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.706898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.706925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 
00:33:30.685 [2024-07-25 04:16:45.707077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.707103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.707256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.707293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.707414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.707440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.707579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.707605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.707748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.707774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 
00:33:30.685 [2024-07-25 04:16:45.707895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.707922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.708096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.708123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.708232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.708265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.708413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.708439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.708561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.708588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 
00:33:30.685 [2024-07-25 04:16:45.708705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.708731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.708914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.708939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.709067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.709092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.709205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.709231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.709357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.709382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 
00:33:30.685 [2024-07-25 04:16:45.709539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.709566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.709697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.709724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.709878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.709904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.710023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.710050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.710201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.710227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 
00:33:30.685 [2024-07-25 04:16:45.710368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.710395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.710529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.710555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.710683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.710710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-07-25 04:16:45.710844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-07-25 04:16:45.710870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.710988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.711014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 
00:33:30.686 [2024-07-25 04:16:45.711163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.711189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.711313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.711340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.711460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.711486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.711604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.711631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.711758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.711783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 
00:33:30.686 [2024-07-25 04:16:45.711928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.711954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.712099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.712125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.712247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.712278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.712407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.712433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.712574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.712600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 
00:33:30.686 [2024-07-25 04:16:45.712722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.712749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.712897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.712922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.713048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.713074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.713199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.713224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.713378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.713404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 
00:33:30.686 [2024-07-25 04:16:45.713549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.713574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.713698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.713724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.713903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.713929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.714073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.714098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.714209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.714235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 
00:33:30.686 [2024-07-25 04:16:45.714363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.714389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.714512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.714538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.714683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.714708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.714834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.714860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.715009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.715035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 
00:33:30.686 [2024-07-25 04:16:45.715148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.715173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.715294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.715321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.715465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.715492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.715643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.715669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.715796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.715822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 
00:33:30.686 [2024-07-25 04:16:45.715946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.715973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.716100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.716126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.716247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.716274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.716393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.716420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.716541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.716568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 
00:33:30.686 [2024-07-25 04:16:45.716694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.716720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-07-25 04:16:45.716866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-07-25 04:16:45.716893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.717012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.717037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.717153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.717179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.717330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.717357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 
00:33:30.687 [2024-07-25 04:16:45.717475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.717501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.717624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.717650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.717768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.717794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.717967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.717993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.718112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.718138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 
00:33:30.687 [2024-07-25 04:16:45.718283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.718310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.718432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.718457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.718600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.718626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.718755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.718781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.718901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.718927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 
00:33:30.687 [2024-07-25 04:16:45.719040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.719067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.719181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.719207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.719328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.719354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.719487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.719513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.719659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.719685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 
00:33:30.687 [2024-07-25 04:16:45.719829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.719856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.719970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.719996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.720148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.720175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.720301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.720329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.720507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.720533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 
00:33:30.687 [2024-07-25 04:16:45.720651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.720677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.720807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.720833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.720996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.721022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.721136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.721161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.721285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.721312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 
00:33:30.687 [2024-07-25 04:16:45.721462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.721487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.721622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.721649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.721774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.721804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.721939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.721965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.722076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.722103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 
00:33:30.687 [2024-07-25 04:16:45.722224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.722257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.722395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.722421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.722565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.722591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.722737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.722762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-07-25 04:16:45.722883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-07-25 04:16:45.722908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 
00:33:30.688 [2024-07-25 04:16:45.723061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.723093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.723236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.723269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.723388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.723413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.723537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.723563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.723679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.723706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 
00:33:30.688 [2024-07-25 04:16:45.723849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.723875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.724012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.724038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.724155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.724181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.724307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.724334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.724456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.724483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 
00:33:30.688 [2024-07-25 04:16:45.724628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.724654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.724779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.724805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.724928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.724954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.725078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.725104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.725233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.725265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 
00:33:30.688 [2024-07-25 04:16:45.725388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.725414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.725542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.725569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.725686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.725712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.725828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.725854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.725966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.725993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 
00:33:30.688 [2024-07-25 04:16:45.726111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.726137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.726308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.726334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.726447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.726473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.726637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:30.688 [2024-07-25 04:16:45.726664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.726821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.726847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 
00:33:30.688 [2024-07-25 04:16:45.726965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.726991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:30.688 [2024-07-25 04:16:45.727173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.727203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.688 [2024-07-25 04:16:45.727364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.727392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.688 [2024-07-25 04:16:45.727535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.727568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 
00:33:30.688 [2024-07-25 04:16:45.727689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.727716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.727848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.727874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-07-25 04:16:45.728006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-07-25 04:16:45.728031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.728157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.728183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.728309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.728336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 
00:33:30.689 [2024-07-25 04:16:45.728462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.728490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.728611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.728638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.728764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.728790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.728914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.728940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.729061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.729088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 
00:33:30.689 [2024-07-25 04:16:45.729207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.729237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.729360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.729386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.729505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.729531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.729652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.729679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.729827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.729856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 
00:33:30.689 [2024-07-25 04:16:45.729980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.730006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.730136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.730161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.730321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.730347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.730476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.730501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.730644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.730670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 
00:33:30.689 [2024-07-25 04:16:45.730814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.730840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.730990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.731016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.731168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.731194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.731318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.731344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.731467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.731493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 
00:33:30.689 [2024-07-25 04:16:45.731612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.731638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.731762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.731788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.731940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.731966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.732107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.732133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.732257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.732284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 
00:33:30.689 [2024-07-25 04:16:45.732401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.732428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.732549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.732576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.732724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.732750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.732875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.732900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.733030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.733056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 
00:33:30.689 [2024-07-25 04:16:45.733213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.733238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.733380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.733406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.733554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.733583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.733702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.733729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-07-25 04:16:45.733900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.733926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 
00:33:30.689 [2024-07-25 04:16:45.734046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-07-25 04:16:45.734072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.734259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.734286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.734412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.734438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.734588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.734614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.734729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.734755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 
00:33:30.690 [2024-07-25 04:16:45.734881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.734907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.735028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.735054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.735192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.735218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.735382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.735409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.735538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.735565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 
00:33:30.690 [2024-07-25 04:16:45.735707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.735732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.735872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.735913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.736067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.736095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.736222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.736254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.736391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.736417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 
00:33:30.690 [2024-07-25 04:16:45.736594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.736620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.736740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.736766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.736912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.736939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.737063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.737088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.737212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.737239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 
00:33:30.690 [2024-07-25 04:16:45.737446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.737472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.737591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.737618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.737737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.737763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.737883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.737908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.738027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.738058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 
00:33:30.690 [2024-07-25 04:16:45.738171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.738197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.738328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.738355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.738487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.738514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.738643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.738669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.738795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.738821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 
00:33:30.690 [2024-07-25 04:16:45.738955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.738981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.739118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.739146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.739280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.739310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.739468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.739493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.739617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.739644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 
00:33:30.690 [2024-07-25 04:16:45.739767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.739792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.739957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.739983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.690 [2024-07-25 04:16:45.740100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.690 [2024-07-25 04:16:45.740126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.690 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.740263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.740289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.740444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.740471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 
00:33:30.691 [2024-07-25 04:16:45.740599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.740624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.740747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.740772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.740903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.740929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.741075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.741101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.741222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.741253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 
00:33:30.691 [2024-07-25 04:16:45.741394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.741420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.741549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.741574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.741698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.741725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.741872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.741898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.742028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.742053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 
00:33:30.691 [2024-07-25 04:16:45.742212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.742238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.742367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.742399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.742545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.742570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.742723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.742750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.742871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.742897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 
00:33:30.691 [2024-07-25 04:16:45.743057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.743083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.743198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.743224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.743358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.743384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.743514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.743541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.743660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.743686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 
00:33:30.691 [2024-07-25 04:16:45.743815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.743841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.743981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.744007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.744213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.744240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.744416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.744441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.744646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.744672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 
00:33:30.691 [2024-07-25 04:16:45.744800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.744826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.744980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.745006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.745124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.745149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.745275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.745310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.745443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.745469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 
00:33:30.691 [2024-07-25 04:16:45.745615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.745640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.745793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.745819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.745960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.745985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.746107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.746133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 00:33:30.691 [2024-07-25 04:16:45.746264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.691 [2024-07-25 04:16:45.746290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.691 qpair failed and we were unable to recover it. 
00:33:30.691 [2024-07-25 04:16:45.746471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.691 [2024-07-25 04:16:45.746496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.746620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.746646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.746796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.746822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.746949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.746975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.747099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.747125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.747246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.747273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.747418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.747443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.747587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.747612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.747726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.747752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.747861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.747887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.748029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.748054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.748209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.748234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.748368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.748394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.748520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.748545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.748695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.748720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.748875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.748902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.749046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.749076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.749197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.749222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.749370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.749397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.749526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.749553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.749680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.749707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.749841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.749867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.750057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.750083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.750239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.750271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.750406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.750433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.750588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.750614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.750731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.750756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.750878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.750903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.751024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.751049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.751162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.751188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.751324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.751351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.751505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.751547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.751676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.751705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.751853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.751880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.752029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.752056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.752187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.752214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.752359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.692 [2024-07-25 04:16:45.752387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.692 qpair failed and we were unable to recover it.
00:33:30.692 [2024-07-25 04:16:45.752546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.752573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.752724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.752750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.752937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.752964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.753083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.753109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.753261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.753289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.753402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.753429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.753595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.753623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.753747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.753774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.753929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.753955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.754100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.754126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.754261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.754289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.754409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.754435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.754614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.754640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 Malloc0
00:33:30.693 [2024-07-25 04:16:45.754766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.754793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.754922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.754949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:30.693 [2024-07-25 04:16:45.755116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.755158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:33:30.693 [2024-07-25 04:16:45.755295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.755326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.693 [2024-07-25 04:16:45.755454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.755480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:30.693 [2024-07-25 04:16:45.755612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.755638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.755763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.755789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.755914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.755941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.756086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.756114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.756255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.756283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.756409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.756435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.756579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.756605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.756719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.756745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.756864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.756890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.757009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.757035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.757157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.757184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.757309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.757336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.693 [2024-07-25 04:16:45.757498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.693 [2024-07-25 04:16:45.757524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.693 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.757642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.757671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.757817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.757843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.757992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.758018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.758154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.758179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.758314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.758315] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:30.694 [2024-07-25 04:16:45.758340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.758459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.758485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.758618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.758644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.758822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.758848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.758966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.758992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.759109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.759135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.759272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.759308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.759433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.759458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.759604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.759630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.759745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.759772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.759904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.759930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.760106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.760132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.760267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.760293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.760451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.760477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.760595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.760621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.760766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.760792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.760965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.760991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.761106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.761132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.761262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.761289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.761417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.761443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.761556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.761581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.761699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.694 [2024-07-25 04:16:45.761727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420
00:33:30.694 qpair failed and we were unable to recover it.
00:33:30.694 [2024-07-25 04:16:45.761856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.694 [2024-07-25 04:16:45.761882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fc4b0 with addr=10.0.0.2, port=4420 00:33:30.694 qpair failed and we were unable to recover it. 00:33:30.694 [2024-07-25 04:16:45.762013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.694 [2024-07-25 04:16:45.762045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.694 qpair failed and we were unable to recover it. 00:33:30.694 [2024-07-25 04:16:45.762179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.694 [2024-07-25 04:16:45.762206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.694 qpair failed and we were unable to recover it. 00:33:30.694 [2024-07-25 04:16:45.762351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.694 [2024-07-25 04:16:45.762378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.694 qpair failed and we were unable to recover it. 00:33:30.694 [2024-07-25 04:16:45.762512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.694 [2024-07-25 04:16:45.762539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.694 qpair failed and we were unable to recover it. 
00:33:30.694 [2024-07-25 04:16:45.762659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.694 [2024-07-25 04:16:45.762684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.694 qpair failed and we were unable to recover it. 00:33:30.694 [2024-07-25 04:16:45.762831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.694 [2024-07-25 04:16:45.762856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.694 qpair failed and we were unable to recover it. 00:33:30.694 [2024-07-25 04:16:45.762969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.694 [2024-07-25 04:16:45.762996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.694 qpair failed and we were unable to recover it. 00:33:30.694 [2024-07-25 04:16:45.763119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.694 [2024-07-25 04:16:45.763145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.694 qpair failed and we were unable to recover it. 00:33:30.694 [2024-07-25 04:16:45.763272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.694 [2024-07-25 04:16:45.763307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.694 qpair failed and we were unable to recover it. 
00:33:30.694 [2024-07-25 04:16:45.763458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.694 [2024-07-25 04:16:45.763483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.694 qpair failed and we were unable to recover it. 00:33:30.694 [2024-07-25 04:16:45.763605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.694 [2024-07-25 04:16:45.763630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.763753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.763779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.763900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.763925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.764046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.764077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 
00:33:30.695 [2024-07-25 04:16:45.764200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.764225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.764369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.764395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.764543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.764570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.764715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.764740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.764863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.764889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 
00:33:30.695 [2024-07-25 04:16:45.765015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.765039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.765193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.765218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.765373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.765399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.765519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.765545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.765658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.765684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 
00:33:30.695 [2024-07-25 04:16:45.765808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.765833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.765947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.765973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.766098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.766123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.766251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.766278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.766395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.766422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 
00:33:30.695 [2024-07-25 04:16:45.766545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.766571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.766690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:30.695 [2024-07-25 04:16:45.766716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.695 [2024-07-25 04:16:45.766858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.766884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 
00:33:30.695 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.695 [2024-07-25 04:16:45.767014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.767040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.767166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.767192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.767323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.767348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.767472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.767498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.767633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.767659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 
00:33:30.695 [2024-07-25 04:16:45.767832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.767857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.767995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.768026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.768165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.768191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.768319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.768345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.768475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.768502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 
00:33:30.695 [2024-07-25 04:16:45.768650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.768676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.768789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.768814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.768934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.768960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.769081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.769107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-07-25 04:16:45.769260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-07-25 04:16:45.769287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 
00:33:30.696 [2024-07-25 04:16:45.769416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.769442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.769564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.769590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.769735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.769763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.769912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.769938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.770083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.770109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 
00:33:30.696 [2024-07-25 04:16:45.770268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.770304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.770451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.770476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.770651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.770676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.770795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.770821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.770977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.771003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 
00:33:30.696 [2024-07-25 04:16:45.771128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.771155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.771302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.771329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.771471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.771496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.771618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.771643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.771787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.771813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 
00:33:30.696 [2024-07-25 04:16:45.771930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.771955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.772070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.772095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.772254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.772280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.772429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.772454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.772615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.772640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 
00:33:30.696 [2024-07-25 04:16:45.772759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.772784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.772912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.772939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.773063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.773088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.773210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.773235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.773410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.773436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 
00:33:30.696 [2024-07-25 04:16:45.773577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.773603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.773752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.773777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.773904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.773930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.774048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.774073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.774224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.774255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5410000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 
00:33:30.696 [2024-07-25 04:16:45.774393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.774432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.774563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.696 [2024-07-25 04:16:45.774596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:30.696 [2024-07-25 04:16:45.774742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.774769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.696 [2024-07-25 04:16:45.774889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.774916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 
00:33:30.696 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.696 [2024-07-25 04:16:45.775044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.775071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-07-25 04:16:45.775204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-07-25 04:16:45.775231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-07-25 04:16:45.775394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-07-25 04:16:45.775421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-07-25 04:16:45.775547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-07-25 04:16:45.775573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-07-25 04:16:45.775708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-07-25 04:16:45.775735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 
00:33:30.697 [2024-07-25 04:16:45.777495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-07-25 04:16:45.777522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-07-25 04:16:45.777645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-07-25 04:16:45.777671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-07-25 04:16:45.777782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-07-25 04:16:45.777809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-07-25 04:16:45.777957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-07-25 04:16:45.777998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-07-25 04:16:45.778185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-07-25 04:16:45.778213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5400000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 
00:33:30.697 [2024-07-25 04:16:45.780855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-07-25 04:16:45.780887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-07-25 04:16:45.781039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-07-25 04:16:45.781065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5408000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 A controller has encountered a failure and is being reset. 00:33:30.697 [2024-07-25 04:16:45.781282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-07-25 04:16:45.781318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60a470 with addr=10.0.0.2, port=4420 00:33:30.697 [2024-07-25 04:16:45.781338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60a470 is same with the state(5) to be set 00:33:30.697 [2024-07-25 04:16:45.781366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60a470 (9): Bad file descriptor 00:33:30.697 [2024-07-25 04:16:45.781394] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.698 [2024-07-25 04:16:45.781410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.698 [2024-07-25 04:16:45.781427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.698 Unable to reset the controller. 
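The repeated `connect() failed, errno = 111` lines above are the host side of the disconnect test: on Linux, errno 111 is `ECONNREFUSED`, which is what every reconnect attempt gets while the target's listener on port 4420 is removed. A minimal sketch of that failure mode (not SPDK code; loopback port 4 is just an arbitrary port assumed to have no listener):

```python
import errno
import socket

# errno 111 on Linux is ECONNREFUSED: with no listener on the target
# port, the kernel actively refuses each TCP connect attempt.
assert errno.ECONNREFUSED == 111

def try_connect(addr, port, timeout=1.0):
    """Attempt a TCP connect and return the resulting errno (0 on
    success), mimicking what posix_sock_create logs when the NVMe/TCP
    listener is down."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((addr, port))
        return 0
    except OSError as e:
        return e.errno
    finally:
        s.close()

# Connecting to a loopback port with no listener yields the same
# errno = 111 seen in the qpair failures above.
print(try_connect("127.0.0.1", 4))
```

Until the listener is restored, the initiator can only keep retrying and logging this errno, which is exactly the loop recorded in the log.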
00:33:30.698 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.698 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:30.698 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.698 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.698 [2024-07-25 04:16:45.786577] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:30.698 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.698 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:30.698 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.698 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.698 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.698 04:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 983997 00:33:31.630 qpair failed and we were unable to recover it. 00:33:31.630 Controller properly reset. 
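Once `nvmf_subsystem_add_listener` re-creates the listener (the `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice above), the previously refused connects succeed and the controller resets cleanly. A minimal stand-in for that recovery, using an ephemeral loopback port in place of 10.0.0.2:4420 (again a sketch, not SPDK code):

```python
import socket

# Re-create a listener, as nvmf_subsystem_add_listener does in the log.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # ephemeral port stands in for 4420
srv.listen(1)
port = srv.getsockname()[1]

# The same connect that failed with errno 111 now succeeds:
# connect_ex() returns 0 instead of an errno value.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
err = cli.connect_ex(("127.0.0.1", port))
print(err)

cli.close()
srv.close()
```

This mirrors the transition in the log from "Unable to reset the controller" to "Controller properly reset" once the listening address exists again.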
00:33:36.886 Initializing NVMe Controllers 00:33:36.886 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:36.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:36.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:36.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:36.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:36.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:36.886 Initialization complete. Launching workers. 00:33:36.886 Starting thread on core 1 00:33:36.886 Starting thread on core 2 00:33:36.886 Starting thread on core 3 00:33:36.886 Starting thread on core 0 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:36.886 00:33:36.886 real 0m10.824s 00:33:36.886 user 0m32.113s 00:33:36.886 sys 0m8.127s 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:36.886 ************************************ 00:33:36.886 END TEST nvmf_target_disconnect_tc2 00:33:36.886 ************************************ 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:36.886 04:16:51 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:36.886 rmmod nvme_tcp 00:33:36.886 rmmod nvme_fabrics 00:33:36.886 rmmod nvme_keyring 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 984524 ']' 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 984524 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 984524 ']' 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 984524 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 984524 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 
00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 984524' 00:33:36.886 killing process with pid 984524 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 984524 00:33:36.886 04:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 984524 00:33:36.886 04:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:36.886 04:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:36.886 04:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:36.886 04:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:36.886 04:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:36.886 04:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.886 04:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.886 04:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.414 04:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:39.414 00:33:39.414 real 0m15.588s 00:33:39.414 user 0m57.899s 00:33:39.414 sys 0m10.603s 00:33:39.414 04:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:39.414 04:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:39.414 ************************************ 00:33:39.414 END TEST nvmf_target_disconnect 00:33:39.414 ************************************ 00:33:39.414 04:16:54 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:39.414 00:33:39.414 real 6m31.989s 00:33:39.414 user 17m0.124s 00:33:39.414 sys 1m28.008s 00:33:39.414 04:16:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:39.414 04:16:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.414 ************************************ 00:33:39.414 END TEST nvmf_host 00:33:39.414 ************************************ 00:33:39.414 00:33:39.414 real 27m9.487s 00:33:39.414 user 74m15.529s 00:33:39.414 sys 6m27.749s 00:33:39.414 04:16:54 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:39.414 04:16:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.414 ************************************ 00:33:39.414 END TEST nvmf_tcp 00:33:39.414 ************************************ 00:33:39.414 04:16:54 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:33:39.414 04:16:54 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:39.414 04:16:54 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:39.414 04:16:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:39.414 04:16:54 -- common/autotest_common.sh@10 -- # set +x 00:33:39.414 ************************************ 00:33:39.414 START TEST spdkcli_nvmf_tcp 00:33:39.414 ************************************ 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:39.414 * Looking for test storage... 
00:33:39.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 
00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=985722 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 985722 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 985722 ']' 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.414 [2024-07-25 04:16:54.327683] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:33:39.414 [2024-07-25 04:16:54.327760] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid985722 ] 00:33:39.414 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.414 [2024-07-25 04:16:54.359650] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:39.414 [2024-07-25 04:16:54.390775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:39.414 [2024-07-25 04:16:54.487268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.414 [2024-07-25 04:16:54.487280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:39.414 04:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.415 04:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:39.415 
'\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:39.415 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:39.415 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:39.415 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:39.415 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:39.415 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:39.415 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:39.415 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:39.415 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:39.415 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:39.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:39.415 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:39.415 ' 00:33:41.943 [2024-07-25 04:16:57.185619] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:43.315 [2024-07-25 04:16:58.409950] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:45.840 [2024-07-25 04:17:00.701346] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:47.736 [2024-07-25 04:17:02.663639] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:49.104 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:49.104 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:49.104 Executing command: ['/bdevs/malloc create 32 512 
Malloc3', 'Malloc3', True] 00:33:49.104 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:49.104 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:49.104 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:49.104 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:49.104 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:49.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:49.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:49.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:49.105 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:49.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:49.105 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:49.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:49.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 
4261 IPv4', '127.0.0.1:4261', True] 00:33:49.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:49.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:49.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:49.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:49.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:49.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:49.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:49.105 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:49.105 04:17:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:49.105 04:17:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:49.105 04:17:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.105 04:17:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:49.105 04:17:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:49.105 04:17:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.105 04:17:04 spdkcli_nvmf_tcp -- 
spdkcli/nvmf.sh@69 -- # check_match 00:33:49.105 04:17:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:49.668 04:17:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:49.668 04:17:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:49.668 04:17:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:49.668 04:17:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:49.668 04:17:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.668 04:17:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:49.668 04:17:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:49.668 04:17:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.668 04:17:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:49.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:49.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:49.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:49.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:49.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 
00:33:49.668 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:49.668 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:49.668 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:49.668 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:49.668 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:49.668 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:49.668 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:49.668 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:49.668 ' 00:33:54.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:54.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:54.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:54.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:54.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:54.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:54.922 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:54.922 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:54.922 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:54.922 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:54.922 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:54.922 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:54.922 
Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:54.922 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:54.922 04:17:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:54.922 04:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:54.922 04:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:54.922 04:17:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 985722 00:33:54.922 04:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 985722 ']' 00:33:54.922 04:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 985722 00:33:54.922 04:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:33:54.922 04:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:54.922 04:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 985722 00:33:54.922 04:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:54.922 04:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:54.923 04:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 985722' 00:33:54.923 killing process with pid 985722 00:33:54.923 04:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 985722 00:33:54.923 04:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 985722 00:33:54.923 04:17:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:54.923 04:17:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:54.923 04:17:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 985722 ']' 00:33:54.923 04:17:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 985722 00:33:54.923 04:17:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 985722 ']' 00:33:54.923 04:17:10 spdkcli_nvmf_tcp 
-- common/autotest_common.sh@954 -- # kill -0 985722 00:33:54.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (985722) - No such process 00:33:54.923 04:17:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 985722 is not found' 00:33:54.923 Process with pid 985722 is not found 00:33:54.923 04:17:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:54.923 04:17:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:54.923 04:17:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:54.923 00:33:54.923 real 0m15.995s 00:33:54.923 user 0m33.840s 00:33:54.923 sys 0m0.810s 00:33:54.923 04:17:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:54.923 04:17:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:54.923 ************************************ 00:33:54.923 END TEST spdkcli_nvmf_tcp 00:33:54.923 ************************************ 00:33:55.180 04:17:10 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:55.181 04:17:10 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:55.181 04:17:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:55.181 04:17:10 -- common/autotest_common.sh@10 -- # set +x 00:33:55.181 ************************************ 00:33:55.181 START TEST nvmf_identify_passthru 00:33:55.181 ************************************ 00:33:55.181 04:17:10 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:55.181 * Looking for test storage... 
00:33:55.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:55.181 04:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:55.181 
04:17:10 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:55.181 04:17:10 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:55.181 04:17:10 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:55.181 04:17:10 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:55.181 04:17:10 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.181 04:17:10 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.181 04:17:10 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.181 04:17:10 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:55.181 04:17:10 nvmf_identify_passthru -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:55.181 04:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:55.181 04:17:10 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:55.181 04:17:10 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:55.181 04:17:10 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:55.181 04:17:10 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.181 04:17:10 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.181 04:17:10 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.181 04:17:10 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:55.181 04:17:10 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.181 04:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.181 04:17:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:55.181 04:17:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:55.181 04:17:10 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:33:55.181 04:17:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:57.079 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:57.079 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:33:57.079 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@291 
-- # local -a pci_devs 00:33:57.079 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:57.079 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:57.079 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:57.079 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:57.079 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:57.080 
04:17:12 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:57.080 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:57.080 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:57.080 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:57.080 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:57.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:57.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:33:57.080 00:33:57.080 --- 10.0.0.2 ping statistics --- 00:33:57.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.080 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:57.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:57.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:33:57.080 00:33:57.080 --- 10.0.0.1 ping statistics --- 00:33:57.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.080 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:57.080 04:17:12 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:57.080 04:17:12 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:57.080 04:17:12 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:57.080 04:17:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:57.080 04:17:12 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:57.080 04:17:12 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:33:57.080 04:17:12 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:33:57.080 04:17:12 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:33:57.080 04:17:12 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:33:57.080 04:17:12 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:33:57.080 04:17:12 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:33:57.080 04:17:12 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:57.080 04:17:12 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:57.080 04:17:12 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:33:57.338 04:17:12 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:33:57.338 04:17:12 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:33:57.338 04:17:12 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:33:57.338 04:17:12 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:33:57.338 04:17:12 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:33:57.338 04:17:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:57.338 04:17:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:57.338 04:17:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:57.338 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.572 04:17:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:01.572 04:17:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:01.572 04:17:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:34:01.572 04:17:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:01.572 EAL: No free 2048 kB hugepages reported on node 1 00:34:05.754 04:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:05.754 04:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:05.754 04:17:20 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:05.754 04:17:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:05.754 04:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:05.754 04:17:20 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:05.754 04:17:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:05.754 04:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=990214 00:34:05.754 04:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:05.754 04:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:05.754 04:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 990214 00:34:05.754 04:17:20 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 990214 ']' 00:34:05.754 04:17:20 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.754 04:17:20 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:05.754 04:17:20 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:05.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.754 04:17:20 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:05.754 04:17:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:05.754 [2024-07-25 04:17:20.882406] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:34:05.754 [2024-07-25 04:17:20.882496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:05.754 EAL: No free 2048 kB hugepages reported on node 1 00:34:05.754 [2024-07-25 04:17:20.921255] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:05.754 [2024-07-25 04:17:20.947855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:05.754 [2024-07-25 04:17:21.037360] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:05.754 [2024-07-25 04:17:21.037422] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:05.754 [2024-07-25 04:17:21.037436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:05.754 [2024-07-25 04:17:21.037448] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:05.754 [2024-07-25 04:17:21.037459] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:05.754 [2024-07-25 04:17:21.037522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.754 [2024-07-25 04:17:21.037584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:05.754 [2024-07-25 04:17:21.037630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:05.754 [2024-07-25 04:17:21.037633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.012 04:17:21 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:06.012 04:17:21 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:34:06.012 04:17:21 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:06.012 04:17:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.012 04:17:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.012 INFO: Log level set to 20 00:34:06.012 INFO: Requests: 00:34:06.012 { 00:34:06.012 "jsonrpc": "2.0", 00:34:06.012 "method": "nvmf_set_config", 00:34:06.012 "id": 1, 00:34:06.012 "params": { 00:34:06.012 "admin_cmd_passthru": { 00:34:06.012 "identify_ctrlr": true 00:34:06.012 } 00:34:06.012 } 00:34:06.012 } 00:34:06.012 00:34:06.012 INFO: response: 00:34:06.012 { 00:34:06.012 "jsonrpc": "2.0", 00:34:06.012 "id": 1, 00:34:06.012 "result": true 00:34:06.012 } 00:34:06.012 00:34:06.012 04:17:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.012 04:17:21 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:06.012 04:17:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.012 04:17:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.012 INFO: Setting log level to 20 00:34:06.012 INFO: Setting log level to 20 00:34:06.012 INFO: Log level set to 20 00:34:06.012 INFO: Log level set to 20 00:34:06.012 
INFO: Requests: 00:34:06.012 { 00:34:06.012 "jsonrpc": "2.0", 00:34:06.012 "method": "framework_start_init", 00:34:06.012 "id": 1 00:34:06.012 } 00:34:06.012 00:34:06.012 INFO: Requests: 00:34:06.012 { 00:34:06.012 "jsonrpc": "2.0", 00:34:06.012 "method": "framework_start_init", 00:34:06.012 "id": 1 00:34:06.012 } 00:34:06.012 00:34:06.012 [2024-07-25 04:17:21.213602] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:06.012 INFO: response: 00:34:06.012 { 00:34:06.012 "jsonrpc": "2.0", 00:34:06.012 "id": 1, 00:34:06.012 "result": true 00:34:06.012 } 00:34:06.012 00:34:06.012 INFO: response: 00:34:06.013 { 00:34:06.013 "jsonrpc": "2.0", 00:34:06.013 "id": 1, 00:34:06.013 "result": true 00:34:06.013 } 00:34:06.013 00:34:06.013 04:17:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.013 04:17:21 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:06.013 04:17:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.013 04:17:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.013 INFO: Setting log level to 40 00:34:06.013 INFO: Setting log level to 40 00:34:06.013 INFO: Setting log level to 40 00:34:06.013 [2024-07-25 04:17:21.223741] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:06.013 04:17:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.013 04:17:21 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:06.013 04:17:21 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:06.013 04:17:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.013 04:17:21 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:06.013 04:17:21 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.013 04:17:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.290 Nvme0n1 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.290 [2024-07-25 04:17:24.120554] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.290 04:17:24 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.290 [ 00:34:09.290 { 00:34:09.290 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:09.290 "subtype": "Discovery", 00:34:09.290 "listen_addresses": [], 00:34:09.290 "allow_any_host": true, 00:34:09.290 "hosts": [] 00:34:09.290 }, 00:34:09.290 { 00:34:09.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:09.290 "subtype": "NVMe", 00:34:09.290 "listen_addresses": [ 00:34:09.290 { 00:34:09.290 "trtype": "TCP", 00:34:09.290 "adrfam": "IPv4", 00:34:09.290 "traddr": "10.0.0.2", 00:34:09.290 "trsvcid": "4420" 00:34:09.290 } 00:34:09.290 ], 00:34:09.290 "allow_any_host": true, 00:34:09.290 "hosts": [], 00:34:09.290 "serial_number": "SPDK00000000000001", 00:34:09.290 "model_number": "SPDK bdev Controller", 00:34:09.290 "max_namespaces": 1, 00:34:09.290 "min_cntlid": 1, 00:34:09.290 "max_cntlid": 65519, 00:34:09.290 "namespaces": [ 00:34:09.290 { 00:34:09.290 "nsid": 1, 00:34:09.290 "bdev_name": "Nvme0n1", 00:34:09.290 "name": "Nvme0n1", 00:34:09.290 "nguid": "8CD8A7AE352044D0BDEA81E500C13C1A", 00:34:09.290 "uuid": "8cd8a7ae-3520-44d0-bdea-81e500c13c1a" 00:34:09.290 } 00:34:09.290 ] 00:34:09.290 } 00:34:09.290 ] 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:09.290 EAL: No free 2048 kB hugepages reported on node 1 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:09.290 04:17:24 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:09.290 EAL: No free 2048 kB hugepages reported on node 1 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.290 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:09.290 04:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:09.290 04:17:24 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:09.290 04:17:24 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:09.290 04:17:24 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:09.290 04:17:24 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:09.290 04:17:24 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:09.290 04:17:24 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:09.291 rmmod 
nvme_tcp 00:34:09.291 rmmod nvme_fabrics 00:34:09.291 rmmod nvme_keyring 00:34:09.291 04:17:24 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:09.291 04:17:24 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:09.291 04:17:24 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:09.291 04:17:24 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 990214 ']' 00:34:09.291 04:17:24 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 990214 00:34:09.291 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 990214 ']' 00:34:09.291 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 990214 00:34:09.291 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:34:09.291 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:09.291 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 990214 00:34:09.291 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:09.291 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:09.291 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 990214' 00:34:09.291 killing process with pid 990214 00:34:09.291 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 990214 00:34:09.291 04:17:24 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 990214 00:34:11.186 04:17:26 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:11.186 04:17:26 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:11.186 04:17:26 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:11.186 04:17:26 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:11.186 
04:17:26 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:11.186 04:17:26 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.186 04:17:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:11.186 04:17:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.083 04:17:28 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:13.083 00:34:13.083 real 0m17.888s 00:34:13.083 user 0m26.652s 00:34:13.083 sys 0m2.222s 00:34:13.083 04:17:28 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:13.083 04:17:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:13.083 ************************************ 00:34:13.083 END TEST nvmf_identify_passthru 00:34:13.083 ************************************ 00:34:13.083 04:17:28 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:13.084 04:17:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:13.084 04:17:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:13.084 04:17:28 -- common/autotest_common.sh@10 -- # set +x 00:34:13.084 ************************************ 00:34:13.084 START TEST nvmf_dif 00:34:13.084 ************************************ 00:34:13.084 04:17:28 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:13.084 * Looking for test storage... 
00:34:13.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:13.084 04:17:28 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.084 04:17:28 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.084 04:17:28 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.084 04:17:28 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.084 04:17:28 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.084 04:17:28 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.084 04:17:28 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.084 04:17:28 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:13.084 04:17:28 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:13.084 04:17:28 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:13.084 04:17:28 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:13.084 04:17:28 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:13.084 04:17:28 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:13.084 04:17:28 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.084 04:17:28 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:13.084 04:17:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:13.084 04:17:28 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:13.084 04:17:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:14.985 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:34:14.985 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:14.985 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:14.985 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:14.985 04:17:30 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:14.985 04:17:30 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:15.243 04:17:30 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:15.243 04:17:30 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:15.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:15.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:34:15.243 00:34:15.243 --- 10.0.0.2 ping statistics --- 00:34:15.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.243 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:34:15.243 04:17:30 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:15.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:15.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:34:15.243 00:34:15.243 --- 10.0.0.1 ping statistics --- 00:34:15.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.243 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:34:15.243 04:17:30 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:15.243 04:17:30 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:15.244 04:17:30 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:15.244 04:17:30 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:16.175 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:16.175 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:16.175 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:16.175 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:16.175 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:16.175 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:16.175 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:16.175 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:16.175 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:16.175 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:16.175 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:16.175 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:16.175 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:16.175 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:16.175 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:16.176 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:16.176 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:16.176 04:17:31 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.176 04:17:31 
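The nvmf_tcp_init sequence logged above (flush addresses, create the cvl_0_0_ns_spdk namespace, move the target port into it, assign 10.0.0.1/10.0.0.2, bring links up, open TCP port 4420) can be condensed into a standalone helper. This is a hypothetical reconstruction for illustration, not the script's own function; the real commands need root, so `DRY_RUN=1` only prints them:

```shell
# Hypothetical helper mirroring the nvmf_tcp_init steps in the log.
# Interface names and addresses (cvl_0_0, cvl_0_1, 10.0.0.1/2, port 4420)
# are taken from the log output; DRY_RUN=1 prints instead of executing.
setup_nvmf_tcp_net() {
    local target_if=$1 initiator_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    local run="sh -c"
    [ "${DRY_RUN:-0}" = 1 ] && run=echo
    $run "ip -4 addr flush $target_if"
    $run "ip -4 addr flush $initiator_if"
    $run "ip netns add $ns"
    $run "ip link set $target_if netns $ns"              # target NIC lives in the namespace
    $run "ip addr add 10.0.0.1/24 dev $initiator_if"     # initiator side stays in the root ns
    $run "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
    $run "ip link set $initiator_if up"
    $run "ip netns exec $ns ip link set $target_if up"
    $run "ip netns exec $ns ip link set lo up"
    $run "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
}
```

The bidirectional pings in the log are the smoke test that this wiring worked before the target is started.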
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:16.176 04:17:31 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:16.176 04:17:31 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.176 04:17:31 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:16.176 04:17:31 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:16.433 04:17:31 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:16.433 04:17:31 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:16.433 04:17:31 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:16.433 04:17:31 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:16.433 04:17:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:16.433 04:17:31 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=993355 00:34:16.433 04:17:31 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:16.433 04:17:31 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 993355 00:34:16.433 04:17:31 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 993355 ']' 00:34:16.433 04:17:31 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.433 04:17:31 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:16.433 04:17:31 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.433 04:17:31 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:16.433 04:17:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:16.433 [2024-07-25 04:17:31.519842] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:34:16.433 [2024-07-25 04:17:31.519914] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.433 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.433 [2024-07-25 04:17:31.556793] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:16.433 [2024-07-25 04:17:31.583456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.433 [2024-07-25 04:17:31.669988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.433 [2024-07-25 04:17:31.670042] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.433 [2024-07-25 04:17:31.670066] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.433 [2024-07-25 04:17:31.670076] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.433 [2024-07-25 04:17:31.670085] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
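NVMF_TRANSPORT_OPTS is assembled incrementally across common.sh and dif.sh, as the trace above shows. A minimal sketch of that assembly (line references taken from the log; the meaning of the tcp-only `-o` flag is left to SPDK's RPC documentation):

```shell
# Reconstructs the option string seen in the trace.
TEST_TRANSPORT=tcp
NVMF_TRANSPORT_OPTS="-t $TEST_TRANSPORT"                            # common.sh@454
if [ "$TEST_TRANSPORT" = tcp ]; then
    NVMF_TRANSPORT_OPTS="$NVMF_TRANSPORT_OPTS -o"                   # common.sh@465, tcp branch only
fi
NVMF_TRANSPORT_OPTS="$NVMF_TRANSPORT_OPTS --dif-insert-or-strip"    # dif.sh@136, this suite's extra flag
echo "$NVMF_TRANSPORT_OPTS"                                          # -> -t tcp -o --dif-insert-or-strip
```

The final string is exactly what `nvmf_create_transport` receives later in the trace.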
00:34:16.433 [2024-07-25 04:17:31.670109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.691 04:17:31 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:16.691 04:17:31 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:34:16.691 04:17:31 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:16.691 04:17:31 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:16.691 04:17:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:16.691 04:17:31 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.691 04:17:31 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:16.691 04:17:31 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:16.691 04:17:31 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.691 04:17:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:16.691 [2024-07-25 04:17:31.810068] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.691 04:17:31 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.691 04:17:31 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:16.691 04:17:31 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:16.691 04:17:31 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:16.691 04:17:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:16.691 ************************************ 00:34:16.691 START TEST fio_dif_1_default 00:34:16.691 ************************************ 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:16.691 bdev_null0 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:16.691 [2024-07-25 04:17:31.866396] 
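Per the trace, create_subsystem reduces to four RPCs per subsystem id: a null bdev with 16-byte metadata and DIF type 1, the NVMe-oF subsystem, its namespace, and a TCP listener. A hypothetical dry-run helper that just prints the RPC invocations (argument values copied from the log):

```shell
# Prints the four rpc_cmd calls that create_subsystem issues for one id.
# Dry-run sketch only; the real script sends these over /var/tmp/spdk.sock.
create_subsystem_cmds() {
    local id=$1
    echo "bdev_null_create bdev_null$id 64 512 --md-size 16 --dif-type 1"
    echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$id --serial-number 53313233-$id --allow-any-host"
    echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$id bdev_null$id"
    echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$id -t tcp -a 10.0.0.2 -s 4420"
}
```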
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:16.691 { 00:34:16.691 "params": { 00:34:16.691 "name": "Nvme$subsystem", 00:34:16.691 "trtype": "$TEST_TRANSPORT", 00:34:16.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.691 "adrfam": "ipv4", 00:34:16.691 "trsvcid": "$NVMF_PORT", 00:34:16.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.691 "hdgst": ${hdgst:-false}, 00:34:16.691 "ddgst": ${ddgst:-false} 00:34:16.691 }, 00:34:16.691 "method": "bdev_nvme_attach_controller" 00:34:16.691 } 00:34:16.691 EOF 00:34:16.691 )") 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:34:16.691 04:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:16.692 04:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:16.692 "params": { 00:34:16.692 "name": "Nvme0", 00:34:16.692 "trtype": "tcp", 00:34:16.692 "traddr": "10.0.0.2", 00:34:16.692 "adrfam": "ipv4", 00:34:16.692 "trsvcid": "4420", 00:34:16.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:16.692 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:16.692 "hdgst": false, 00:34:16.692 "ddgst": false 00:34:16.692 }, 00:34:16.692 "method": "bdev_nvme_attach_controller" 00:34:16.692 }' 00:34:16.692 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:16.692 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:16.692 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:16.692 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:16.692 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:16.692 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:16.692 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:16.692 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:16.692 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:16.692 04:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:16.949 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:16.949 fio-3.35 
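The JSON handed to fio's spdk_bdev plugin over /dev/fd/62 is built one `bdev_nvme_attach_controller` stanza per subsystem id, exactly as the heredoc template and the rendered output above show. A rough, self-contained re-creation of gen_nvmf_target_json (any outer wrapper that nvmf/common.sh adds around these stanzas is not visible in the log and is omitted here):

```shell
# Sketch of gen_nvmf_target_json from the trace: one controller stanza per
# id, comma-joined. Address/port values are the test defaults from the log.
gen_nvmf_target_json() {
    local sub cfg=()
    for sub in "${@:-0}"; do
        cfg+=("{\"params\": {\"name\": \"Nvme$sub\", \"trtype\": \"tcp\", \"traddr\": \"10.0.0.2\", \"adrfam\": \"ipv4\", \"trsvcid\": \"4420\", \"subnqn\": \"nqn.2016-06.io.spdk:cnode$sub\", \"hostnqn\": \"nqn.2016-06.io.spdk:host$sub\", \"hdgst\": false, \"ddgst\": false}, \"method\": \"bdev_nvme_attach_controller\"}")
    done
    local IFS=,
    printf '%s\n' "${cfg[*]}"
}
```

Called with `0` it yields the single-controller config seen here; the multi-subsystem test later calls it with `0 1` to attach two controllers.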
00:34:16.949 Starting 1 thread 00:34:16.949 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.167 00:34:29.167 filename0: (groupid=0, jobs=1): err= 0: pid=993580: Thu Jul 25 04:17:42 2024 00:34:29.167 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10003msec) 00:34:29.167 slat (nsec): min=4636, max=46390, avg=9354.94, stdev=2741.98 00:34:29.167 clat (usec): min=705, max=48199, avg=21072.82, stdev=20150.56 00:34:29.167 lat (usec): min=713, max=48216, avg=21082.17, stdev=20150.59 00:34:29.167 clat percentiles (usec): 00:34:29.167 | 1.00th=[ 758], 5.00th=[ 791], 10.00th=[ 807], 20.00th=[ 840], 00:34:29.167 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[41157], 60.00th=[41157], 00:34:29.167 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:29.167 | 99.00th=[41157], 99.50th=[41157], 99.90th=[47973], 99.95th=[47973], 00:34:29.167 | 99.99th=[47973] 00:34:29.167 bw ( KiB/s): min= 672, max= 768, per=99.71%, avg=756.80, stdev=28.00, samples=20 00:34:29.167 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:34:29.167 lat (usec) : 750=0.84%, 1000=48.89% 00:34:29.167 lat (msec) : 2=0.05%, 50=50.21% 00:34:29.167 cpu : usr=90.00%, sys=9.74%, ctx=11, majf=0, minf=231 00:34:29.167 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.167 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.167 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:29.167 00:34:29.167 Run status group 0 (all jobs): 00:34:29.167 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10003-10003msec 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:29.167 04:17:42 
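The headline numbers of the fio run above are internally consistent: 1896 reads of 4 KiB over 10003 ms land at the reported 758 KiB/s. A quick integer-arithmetic check:

```shell
# Cross-check fio's reported bandwidth from its own issued-I/O count.
ios=1896        # issued rwts: total=1896 reads
bs_kib=4        # bs=4096B
runtime_ms=10003
bw_kib_s=$(( ios * bs_kib * 1000 / runtime_ms ))
echo "$bw_kib_s"   # -> 758, matching BW=758KiB/s (io=7584KiB)
```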
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.167 00:34:29.167 real 0m11.047s 00:34:29.167 user 0m10.196s 00:34:29.167 sys 0m1.243s 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.167 ************************************ 00:34:29.167 END TEST fio_dif_1_default 00:34:29.167 ************************************ 00:34:29.167 04:17:42 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:29.167 04:17:42 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:29.167 04:17:42 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:29.167 04:17:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.167 ************************************ 00:34:29.167 START TEST fio_dif_1_multi_subsystems 00:34:29.167 
************************************ 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:29.167 bdev_null0 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:29.167 [2024-07-25 04:17:42.969954] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:29.167 bdev_null1 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.167 
04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.167 04:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:29.167 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.167 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:29.167 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:29.167 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:29.167 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:29.167 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:29.167 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:29.168 { 00:34:29.168 "params": { 00:34:29.168 "name": "Nvme$subsystem", 00:34:29.168 
"trtype": "$TEST_TRANSPORT", 00:34:29.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.168 "adrfam": "ipv4", 00:34:29.168 "trsvcid": "$NVMF_PORT", 00:34:29.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.168 "hdgst": ${hdgst:-false}, 00:34:29.168 "ddgst": ${ddgst:-false} 00:34:29.168 }, 00:34:29.168 "method": "bdev_nvme_attach_controller" 00:34:29.168 } 00:34:29.168 EOF 00:34:29.168 )") 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- 
# for sanitizer in "${sanitizers[@]}" 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:29.168 { 00:34:29.168 "params": { 00:34:29.168 "name": "Nvme$subsystem", 00:34:29.168 "trtype": "$TEST_TRANSPORT", 00:34:29.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.168 "adrfam": "ipv4", 00:34:29.168 "trsvcid": "$NVMF_PORT", 00:34:29.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.168 "hdgst": ${hdgst:-false}, 00:34:29.168 "ddgst": ${ddgst:-false} 00:34:29.168 }, 00:34:29.168 "method": "bdev_nvme_attach_controller" 00:34:29.168 } 00:34:29.168 EOF 00:34:29.168 )") 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=,
00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:34:29.168 "params": {
00:34:29.168 "name": "Nvme0",
00:34:29.168 "trtype": "tcp",
00:34:29.168 "traddr": "10.0.0.2",
00:34:29.168 "adrfam": "ipv4",
00:34:29.168 "trsvcid": "4420",
00:34:29.168 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:29.168 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:34:29.168 "hdgst": false,
00:34:29.168 "ddgst": false
00:34:29.168 },
00:34:29.168 "method": "bdev_nvme_attach_controller"
00:34:29.168 },{
00:34:29.168 "params": {
00:34:29.168 "name": "Nvme1",
00:34:29.168 "trtype": "tcp",
00:34:29.168 "traddr": "10.0.0.2",
00:34:29.168 "adrfam": "ipv4",
00:34:29.168 "trsvcid": "4420",
00:34:29.168 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:29.168 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:29.168 "hdgst": false,
00:34:29.168 "ddgst": false
00:34:29.168 },
00:34:29.168 "method": "bdev_nvme_attach_controller"
00:34:29.168 }'
00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=
00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=
00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:34:29.168 04:17:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:34:29.168 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:34:29.168 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:34:29.168 fio-3.35
00:34:29.168 Starting 2 threads
00:34:29.168 EAL: No free 2048 kB hugepages reported on node 1
00:34:39.128
00:34:39.128 filename0: (groupid=0, jobs=1): err= 0: pid=994982: Thu Jul 25 04:17:53 2024
00:34:39.128 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec)
00:34:39.128 slat (nsec): min=4890, max=30827, avg=9693.98, stdev=2782.60
00:34:39.128 clat (usec): min=40841, max=47752, avg=40997.44, stdev=433.66
00:34:39.128 lat (usec): min=40849, max=47766, avg=41007.13, stdev=433.55
00:34:39.128 clat percentiles (usec):
00:34:39.128 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:34:39.128 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:34:39.128 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:34:39.128 | 99.00th=[41157], 99.50th=[41157], 99.90th=[47973], 99.95th=[47973],
00:34:39.128 | 99.99th=[47973]
00:34:39.128 bw ( KiB/s): min= 384, max= 416, per=49.75%, avg=388.80, stdev=11.72, samples=20
00:34:39.128 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20
00:34:39.128 lat (msec) : 50=100.00%
00:34:39.128 cpu : usr=93.85%, sys=5.87%, ctx=9, majf=0, minf=193
00:34:39.128 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:39.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:39.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:39.128 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:39.128 latency : target=0, window=0, percentile=100.00%, depth=4
00:34:39.128 filename1: (groupid=0, jobs=1): err= 0: pid=994983: Thu Jul 25 04:17:53 2024
00:34:39.128 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec)
00:34:39.128 slat (nsec): min=4637, max=21938, avg=9679.80, stdev=2609.89
00:34:39.128 clat (usec): min=40894, max=47847, avg=41001.89, stdev=442.56
00:34:39.128 lat (usec): min=40902, max=47864, avg=41011.57, stdev=442.82
00:34:39.128 clat percentiles (usec):
00:34:39.128 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:34:39.128 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:34:39.128 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:34:39.128 | 99.00th=[41157], 99.50th=[41681], 99.90th=[47973], 99.95th=[47973],
00:34:39.128 | 99.99th=[47973]
00:34:39.128 bw ( KiB/s): min= 384, max= 416, per=49.75%, avg=388.80, stdev=11.72, samples=20
00:34:39.129 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20
00:34:39.129 lat (msec) : 50=100.00%
00:34:39.129 cpu : usr=93.20%, sys=6.49%, ctx=22, majf=0, minf=65
00:34:39.129 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:39.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:39.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:39.129 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:39.129 latency : target=0, window=0, percentile=100.00%, depth=4
00:34:39.129
00:34:39.129 Run status group 0 (all jobs):
00:34:39.129 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10011-10012msec
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.129
00:34:39.129 real 0m11.294s
00:34:39.129 user 0m20.049s
00:34:39.129 sys 0m1.505s
04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:39.129 04:17:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:34:39.129 ************************************
00:34:39.129 END TEST fio_dif_1_multi_subsystems
00:34:39.129 ************************************
00:34:39.129 04:17:54 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params
00:34:39.129 04:17:54 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:34:39.129 04:17:54 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:39.129 04:17:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:34:39.129 ************************************
00:34:39.129 START TEST fio_dif_rand_params
00:34:39.129 ************************************
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:39.129 bdev_null0
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:39.129 [2024-07-25 04:17:54.316224] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=()
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:34:39.129 {
00:34:39.129 "params": {
00:34:39.129 "name": "Nvme$subsystem",
00:34:39.129 "trtype": "$TEST_TRANSPORT",
00:34:39.129 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:39.129 "adrfam": "ipv4",
00:34:39.129 "trsvcid": "$NVMF_PORT",
00:34:39.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:39.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:39.129 "hdgst": ${hdgst:-false},
00:34:39.129 "ddgst": ${ddgst:-false}
00:34:39.129 },
00:34:39.129 "method": "bdev_nvme_attach_controller"
00:34:39.129 }
00:34:39.129 EOF
00:34:39.129 )")
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib=
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq .
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=,
00:34:39.129 04:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:34:39.129 "params": {
00:34:39.129 "name": "Nvme0",
00:34:39.129 "trtype": "tcp",
00:34:39.129 "traddr": "10.0.0.2",
00:34:39.129 "adrfam": "ipv4",
00:34:39.129 "trsvcid": "4420",
00:34:39.129 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:39.129 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:34:39.129 "hdgst": false,
00:34:39.129 "ddgst": false
00:34:39.129 },
00:34:39.130 "method": "bdev_nvme_attach_controller"
00:34:39.130 }'
00:34:39.130 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:34:39.130 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:34:39.130 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:34:39.130 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:34:39.130 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:34:39.130 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:34:39.130 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:34:39.130 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:34:39.130 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:34:39.130 04:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:34:39.387 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:34:39.387 ...
00:34:39.387 fio-3.35
00:34:39.387 Starting 3 threads
00:34:39.387 EAL: No free 2048 kB hugepages reported on node 1
00:34:45.940
00:34:45.940 filename0: (groupid=0, jobs=1): err= 0: pid=996378: Thu Jul 25 04:18:00 2024
00:34:45.940 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(131MiB/5041msec)
00:34:45.940 slat (nsec): min=7208, max=53484, avg=16125.08, stdev=5356.45
00:34:45.940 clat (usec): min=5062, max=89208, avg=14433.42, stdev=12862.28
00:34:45.940 lat (usec): min=5074, max=89225, avg=14449.54, stdev=12862.56
00:34:45.940 clat percentiles (usec):
00:34:45.940 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 8356],
00:34:45.940 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10814],
00:34:45.940 | 70.00th=[12387], 80.00th=[13960], 90.00th=[47449], 95.00th=[50070],
00:34:45.940 | 99.00th=[53740], 99.50th=[55313], 99.90th=[57934], 99.95th=[89654],
00:34:45.940 | 99.99th=[89654]
00:34:45.940 bw ( KiB/s): min=20736, max=36352, per=34.48%, avg=26700.80, stdev=4027.97, samples=10
00:34:45.940 iops : min= 162, max= 284, avg=208.60, stdev=31.47, samples=10
00:34:45.940 lat (msec) : 10=50.38%, 20=39.10%, 50=5.26%, 100=5.26%
00:34:45.940 cpu : usr=93.85%, sys=5.71%, ctx=9, majf=0, minf=87
00:34:45.940 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:45.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:45.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:45.940 issued rwts: total=1046,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:45.940 latency : target=0, window=0, percentile=100.00%, depth=3
00:34:45.940 filename0: (groupid=0, jobs=1): err= 0: pid=996379: Thu Jul 25 04:18:00 2024
00:34:45.940 read: IOPS=206, BW=25.8MiB/s (27.0MB/s)(129MiB/5002msec)
00:34:45.940 slat (nsec): min=5438, max=39053, avg=15630.88, stdev=4427.92
00:34:45.940 clat (usec): min=5275, max=93858, avg=14519.54, stdev=13622.39
00:34:45.940 lat (usec): min=5288, max=93876, avg=14535.17, stdev=13622.34
00:34:45.940 clat percentiles (usec):
00:34:45.940 | 1.00th=[ 5473], 5.00th=[ 5997], 10.00th=[ 6915], 20.00th=[ 8094],
00:34:45.940 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10683],
00:34:45.940 | 70.00th=[12125], 80.00th=[13042], 90.00th=[49021], 95.00th=[50594],
00:34:45.940 | 99.00th=[53740], 99.50th=[54264], 99.90th=[88605], 99.95th=[93848],
00:34:45.940 | 99.99th=[93848]
00:34:45.940 bw ( KiB/s): min=17955, max=37376, per=34.02%, avg=26345.90, stdev=6419.84, samples=10
00:34:45.940 iops : min= 140, max= 292, avg=205.80, stdev=50.19, samples=10
00:34:45.940 lat (msec) : 10=53.97%, 20=34.59%, 50=3.97%, 100=7.46%
00:34:45.940 cpu : usr=95.08%, sys=4.30%, ctx=12, majf=0, minf=56
00:34:45.940 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:45.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:45.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:45.940 issued rwts: total=1032,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:45.940 latency : target=0, window=0, percentile=100.00%, depth=3
00:34:45.940 filename0: (groupid=0, jobs=1): err= 0: pid=996380: Thu Jul 25 04:18:00 2024
00:34:45.940 read: IOPS=193, BW=24.1MiB/s (25.3MB/s)(122MiB/5045msec)
00:34:45.940 slat (nsec): min=5109, max=82986, avg=15086.51, stdev=4817.72
00:34:45.940 clat (usec): min=4930, max=88865, avg=15472.63, stdev=15012.12
00:34:45.940 lat (usec): min=4945, max=88878, avg=15487.71, stdev=15012.31
00:34:45.940 clat percentiles (usec):
00:34:45.940 | 1.00th=[ 5473], 5.00th=[ 5866], 10.00th=[ 6194], 20.00th=[ 7635],
00:34:45.940 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10683],
00:34:45.940 | 70.00th=[11863], 80.00th=[12780], 90.00th=[49546], 95.00th=[51643],
00:34:45.940 | 99.00th=[53216], 99.50th=[53216], 99.90th=[88605], 99.95th=[88605],
00:34:45.940 | 99.99th=[88605]
00:34:45.940 bw ( KiB/s): min=17920, max=30208, per=32.13%, avg=24883.20, stdev=4070.68, samples=10
00:34:45.940 iops : min= 140, max= 236, avg=194.40, stdev=31.80, samples=10
00:34:45.940 lat (msec) : 10=55.44%, 20=29.67%, 50=5.34%, 100=9.55%
00:34:45.940 cpu : usr=94.94%, sys=4.42%, ctx=12, majf=0, minf=142
00:34:45.940 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:45.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:45.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:45.940 issued rwts: total=974,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:45.940 latency : target=0, window=0, percentile=100.00%, depth=3
00:34:45.940
00:34:45.940 Run status group 0 (all jobs):
00:34:45.940 READ: bw=75.6MiB/s (79.3MB/s), 24.1MiB/s-25.9MiB/s (25.3MB/s-27.2MB/s), io=382MiB (400MB), run=5002-5045msec
00:34:45.940 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:34:45.940 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:34:45.940 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:34:45.940 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:34:45.940 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:34:45.940 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:45.940 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:45.941 bdev_null0
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:45.941 [2024-07-25 04:18:00.520006] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:45.941 bdev_null1
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:45.941 bdev_null2
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=()
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:34:45.941 {
00:34:45.941 "params": {
00:34:45.941 "name": "Nvme$subsystem",
00:34:45.941 "trtype": "$TEST_TRANSPORT",
00:34:45.941 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:45.941 "adrfam": "ipv4",
00:34:45.941 "trsvcid": "$NVMF_PORT",
00:34:45.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:45.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:45.941 "hdgst": ${hdgst:-false},
00:34:45.941 "ddgst": ${ddgst:-false}
00:34:45.941 },
00:34:45.941 "method": "bdev_nvme_attach_controller"
00:34:45.941 }
00:34:45.941 EOF
00:34:45.941 )")
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib=
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:34:45.941 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:34:45.942 {
00:34:45.942 "params": {
00:34:45.942 "name": "Nvme$subsystem",
00:34:45.942 "trtype": "$TEST_TRANSPORT",
00:34:45.942 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:45.942 "adrfam": "ipv4",
00:34:45.942 "trsvcid": "$NVMF_PORT",
00:34:45.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:45.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:45.942 "hdgst": ${hdgst:-false},
00:34:45.942 "ddgst": ${ddgst:-false}
00:34:45.942 },
00:34:45.942 "method": "bdev_nvme_attach_controller"
00:34:45.942 }
00:34:45.942 EOF
00:34:45.942 )")
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:34:45.942 {
00:34:45.942 "params": {
00:34:45.942 "name": "Nvme$subsystem",
00:34:45.942 "trtype": "$TEST_TRANSPORT",
00:34:45.942 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:45.942 "adrfam": "ipv4",
00:34:45.942 "trsvcid": "$NVMF_PORT",
00:34:45.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:45.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:45.942 "hdgst": ${hdgst:-false},
00:34:45.942 "ddgst": ${ddgst:-false}
00:34:45.942 },
00:34:45.942 "method": "bdev_nvme_attach_controller"
00:34:45.942 }
00:34:45.942 EOF
00:34:45.942 )")
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq .
00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:45.942 "params": { 00:34:45.942 "name": "Nvme0", 00:34:45.942 "trtype": "tcp", 00:34:45.942 "traddr": "10.0.0.2", 00:34:45.942 "adrfam": "ipv4", 00:34:45.942 "trsvcid": "4420", 00:34:45.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:45.942 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:45.942 "hdgst": false, 00:34:45.942 "ddgst": false 00:34:45.942 }, 00:34:45.942 "method": "bdev_nvme_attach_controller" 00:34:45.942 },{ 00:34:45.942 "params": { 00:34:45.942 "name": "Nvme1", 00:34:45.942 "trtype": "tcp", 00:34:45.942 "traddr": "10.0.0.2", 00:34:45.942 "adrfam": "ipv4", 00:34:45.942 "trsvcid": "4420", 00:34:45.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:45.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:45.942 "hdgst": false, 00:34:45.942 "ddgst": false 00:34:45.942 }, 00:34:45.942 "method": "bdev_nvme_attach_controller" 00:34:45.942 },{ 00:34:45.942 "params": { 00:34:45.942 "name": "Nvme2", 00:34:45.942 "trtype": "tcp", 00:34:45.942 "traddr": "10.0.0.2", 00:34:45.942 "adrfam": "ipv4", 00:34:45.942 "trsvcid": "4420", 00:34:45.942 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:45.942 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:45.942 "hdgst": false, 00:34:45.942 "ddgst": false 00:34:45.942 }, 00:34:45.942 "method": "bdev_nvme_attach_controller" 00:34:45.942 }' 00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:45.942 04:18:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:45.942 04:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:45.942 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:45.942 ... 00:34:45.942 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:45.942 ... 00:34:45.942 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:45.942 ... 
00:34:45.942 fio-3.35 00:34:45.942 Starting 24 threads 00:34:45.942 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.163 00:34:58.163 filename0: (groupid=0, jobs=1): err= 0: pid=997337: Thu Jul 25 04:18:11 2024 00:34:58.163 read: IOPS=204, BW=819KiB/s (839kB/s)(8352KiB/10192msec) 00:34:58.163 slat (usec): min=7, max=130, avg=48.33, stdev=28.19 00:34:58.163 clat (msec): min=31, max=416, avg=77.18, stdev=88.39 00:34:58.163 lat (msec): min=31, max=416, avg=77.23, stdev=88.38 00:34:58.163 clat percentiles (msec): 00:34:58.163 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.163 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:34:58.163 | 70.00th=[ 34], 80.00th=[ 174], 90.00th=[ 251], 95.00th=[ 262], 00:34:58.163 | 99.00th=[ 321], 99.50th=[ 334], 99.90th=[ 418], 99.95th=[ 418], 00:34:58.163 | 99.99th=[ 418] 00:34:58.163 bw ( KiB/s): min= 224, max= 2048, per=4.35%, avg=828.80, stdev=797.28, samples=20 00:34:58.163 iops : min= 56, max= 512, avg=207.20, stdev=199.32, samples=20 00:34:58.163 lat (msec) : 50=78.93%, 100=0.77%, 250=10.44%, 500=9.87% 00:34:58.163 cpu : usr=95.32%, sys=2.65%, ctx=246, majf=0, minf=53 00:34:58.163 IO depths : 1=5.2%, 2=10.5%, 4=22.1%, 8=54.8%, 16=7.4%, 32=0.0%, >=64=0.0% 00:34:58.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.163 complete : 0=0.0%, 4=93.2%, 8=1.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.163 issued rwts: total=2088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.163 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.163 filename0: (groupid=0, jobs=1): err= 0: pid=997338: Thu Jul 25 04:18:11 2024 00:34:58.163 read: IOPS=201, BW=805KiB/s (824kB/s)(8192KiB/10181msec) 00:34:58.163 slat (usec): min=8, max=129, avg=40.76, stdev=31.51 00:34:58.163 clat (msec): min=30, max=354, avg=79.14, stdev=91.56 00:34:58.163 lat (msec): min=30, max=354, avg=79.18, stdev=91.55 00:34:58.163 clat percentiles (msec): 00:34:58.163 | 1.00th=[ 32], 
5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.163 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:34:58.163 | 70.00th=[ 34], 80.00th=[ 159], 90.00th=[ 257], 95.00th=[ 266], 00:34:58.163 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 355], 00:34:58.163 | 99.99th=[ 355] 00:34:58.163 bw ( KiB/s): min= 128, max= 1920, per=4.26%, avg=812.80, stdev=796.97, samples=20 00:34:58.163 iops : min= 32, max= 480, avg=203.20, stdev=199.24, samples=20 00:34:58.163 lat (msec) : 50=78.91%, 250=8.79%, 500=12.30% 00:34:58.163 cpu : usr=97.53%, sys=1.88%, ctx=40, majf=0, minf=48 00:34:58.163 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:58.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.163 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.164 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.164 filename0: (groupid=0, jobs=1): err= 0: pid=997339: Thu Jul 25 04:18:11 2024 00:34:58.164 read: IOPS=201, BW=807KiB/s (827kB/s)(8216KiB/10176msec) 00:34:58.164 slat (nsec): min=8262, max=99439, avg=33744.11, stdev=22160.26 00:34:58.164 clat (msec): min=23, max=377, avg=78.52, stdev=88.40 00:34:58.164 lat (msec): min=23, max=377, avg=78.55, stdev=88.39 00:34:58.164 clat percentiles (msec): 00:34:58.164 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.164 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:34:58.164 | 70.00th=[ 34], 80.00th=[ 190], 90.00th=[ 249], 95.00th=[ 262], 00:34:58.164 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 376], 99.95th=[ 376], 00:34:58.164 | 99.99th=[ 376] 00:34:58.164 bw ( KiB/s): min= 192, max= 1920, per=4.28%, avg=815.20, stdev=784.70, samples=20 00:34:58.164 iops : min= 48, max= 480, avg=203.80, stdev=196.18, samples=20 00:34:58.164 lat (msec) : 50=77.90%, 100=0.78%, 250=12.85%, 500=8.47% 00:34:58.164 
cpu : usr=97.03%, sys=1.81%, ctx=52, majf=0, minf=63 00:34:58.164 IO depths : 1=4.4%, 2=9.7%, 4=22.3%, 8=55.5%, 16=8.1%, 32=0.0%, >=64=0.0% 00:34:58.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.164 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.164 issued rwts: total=2054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.164 filename0: (groupid=0, jobs=1): err= 0: pid=997340: Thu Jul 25 04:18:11 2024 00:34:58.164 read: IOPS=201, BW=804KiB/s (824kB/s)(8184KiB/10176msec) 00:34:58.164 slat (usec): min=8, max=132, avg=31.83, stdev=12.45 00:34:58.164 clat (msec): min=28, max=434, avg=79.23, stdev=92.04 00:34:58.164 lat (msec): min=28, max=434, avg=79.26, stdev=92.03 00:34:58.164 clat percentiles (msec): 00:34:58.164 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.164 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:34:58.164 | 70.00th=[ 34], 80.00th=[ 159], 90.00th=[ 257], 95.00th=[ 266], 00:34:58.164 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 435], 00:34:58.164 | 99.99th=[ 435] 00:34:58.164 bw ( KiB/s): min= 144, max= 1920, per=4.26%, avg=812.00, stdev=797.43, samples=20 00:34:58.164 iops : min= 36, max= 480, avg=203.00, stdev=199.36, samples=20 00:34:58.164 lat (msec) : 50=78.98%, 250=8.60%, 500=12.41% 00:34:58.164 cpu : usr=95.91%, sys=2.57%, ctx=193, majf=0, minf=51 00:34:58.164 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:34:58.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.164 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.164 issued rwts: total=2046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.164 filename0: (groupid=0, jobs=1): err= 0: pid=997341: Thu Jul 25 04:18:11 2024 00:34:58.164 read: 
IOPS=188, BW=756KiB/s (774kB/s)(7680KiB/10159msec) 00:34:58.164 slat (usec): min=8, max=131, avg=45.54, stdev=23.69 00:34:58.164 clat (msec): min=31, max=461, avg=84.24, stdev=118.67 00:34:58.164 lat (msec): min=31, max=461, avg=84.28, stdev=118.66 00:34:58.164 clat percentiles (msec): 00:34:58.164 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.164 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:34:58.164 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 351], 95.00th=[ 380], 00:34:58.164 | 99.00th=[ 401], 99.50th=[ 418], 99.90th=[ 464], 99.95th=[ 464], 00:34:58.164 | 99.99th=[ 464] 00:34:58.164 bw ( KiB/s): min= 128, max= 2048, per=4.00%, avg=761.60, stdev=826.91, samples=20 00:34:58.164 iops : min= 32, max= 512, avg=190.40, stdev=206.73, samples=20 00:34:58.164 lat (msec) : 50=83.33%, 250=1.77%, 500=14.90% 00:34:58.164 cpu : usr=96.99%, sys=1.88%, ctx=51, majf=0, minf=40 00:34:58.164 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:58.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.164 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.164 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.164 filename0: (groupid=0, jobs=1): err= 0: pid=997342: Thu Jul 25 04:18:11 2024 00:34:58.164 read: IOPS=201, BW=805KiB/s (824kB/s)(8176KiB/10159msec) 00:34:58.164 slat (usec): min=8, max=113, avg=30.35, stdev=21.64 00:34:58.164 clat (msec): min=23, max=397, avg=79.25, stdev=91.93 00:34:58.164 lat (msec): min=23, max=397, avg=79.28, stdev=91.92 00:34:58.164 clat percentiles (msec): 00:34:58.164 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.164 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:34:58.164 | 70.00th=[ 34], 80.00th=[ 165], 90.00th=[ 251], 95.00th=[ 266], 00:34:58.164 | 99.00th=[ 338], 99.50th=[ 397], 99.90th=[ 
397], 99.95th=[ 397], 00:34:58.164 | 99.99th=[ 397] 00:34:58.164 bw ( KiB/s): min= 144, max= 1920, per=4.26%, avg=811.20, stdev=784.54, samples=20 00:34:58.164 iops : min= 36, max= 480, avg=202.80, stdev=196.14, samples=20 00:34:58.164 lat (msec) : 50=78.28%, 100=0.78%, 250=10.08%, 500=10.86% 00:34:58.164 cpu : usr=95.84%, sys=2.37%, ctx=108, majf=0, minf=42 00:34:58.164 IO depths : 1=3.7%, 2=9.2%, 4=23.0%, 8=55.2%, 16=8.9%, 32=0.0%, >=64=0.0% 00:34:58.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.164 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.164 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.164 filename0: (groupid=0, jobs=1): err= 0: pid=997343: Thu Jul 25 04:18:11 2024 00:34:58.164 read: IOPS=189, BW=760KiB/s (778kB/s)(7680KiB/10111msec) 00:34:58.164 slat (usec): min=8, max=116, avg=50.27, stdev=24.75 00:34:58.164 clat (msec): min=27, max=514, avg=83.81, stdev=118.35 00:34:58.164 lat (msec): min=27, max=514, avg=83.86, stdev=118.34 00:34:58.164 clat percentiles (msec): 00:34:58.164 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.164 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:34:58.164 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 351], 95.00th=[ 380], 00:34:58.164 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 514], 99.95th=[ 514], 00:34:58.164 | 99.99th=[ 514] 00:34:58.164 bw ( KiB/s): min= 128, max= 2048, per=4.00%, avg=761.60, stdev=826.68, samples=20 00:34:58.164 iops : min= 32, max= 512, avg=190.40, stdev=206.67, samples=20 00:34:58.164 lat (msec) : 50=83.33%, 250=2.40%, 500=13.96%, 750=0.31% 00:34:58.164 cpu : usr=96.93%, sys=1.91%, ctx=87, majf=0, minf=44 00:34:58.164 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:34:58.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.164 complete : 
0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.164 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.164 filename0: (groupid=0, jobs=1): err= 0: pid=997344: Thu Jul 25 04:18:11 2024 00:34:58.164 read: IOPS=195, BW=783KiB/s (802kB/s)(7960KiB/10169msec) 00:34:58.164 slat (usec): min=5, max=113, avg=44.69, stdev=23.03 00:34:58.164 clat (msec): min=27, max=503, avg=81.37, stdev=102.60 00:34:58.164 lat (msec): min=27, max=503, avg=81.41, stdev=102.60 00:34:58.164 clat percentiles (msec): 00:34:58.164 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.164 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:34:58.164 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 259], 95.00th=[ 342], 00:34:58.164 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 506], 99.95th=[ 506], 00:34:58.164 | 99.99th=[ 506] 00:34:58.164 bw ( KiB/s): min= 128, max= 2048, per=4.14%, avg=789.60, stdev=805.59, samples=20 00:34:58.164 iops : min= 32, max= 512, avg=197.40, stdev=201.40, samples=20 00:34:58.164 lat (msec) : 50=80.40%, 250=5.43%, 500=14.07%, 750=0.10% 00:34:58.164 cpu : usr=98.06%, sys=1.47%, ctx=38, majf=0, minf=47 00:34:58.164 IO depths : 1=5.5%, 2=11.6%, 4=24.5%, 8=51.4%, 16=7.0%, 32=0.0%, >=64=0.0% 00:34:58.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.164 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.164 issued rwts: total=1990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.164 filename1: (groupid=0, jobs=1): err= 0: pid=997345: Thu Jul 25 04:18:11 2024 00:34:58.164 read: IOPS=205, BW=821KiB/s (841kB/s)(8368KiB/10192msec) 00:34:58.164 slat (nsec): min=6589, max=94188, avg=14705.71, stdev=12357.85 00:34:58.164 clat (msec): min=17, max=388, avg=77.49, stdev=87.74 00:34:58.164 lat (msec): min=17, max=388, 
avg=77.50, stdev=87.74 00:34:58.164 clat percentiles (msec): 00:34:58.164 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.165 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:58.165 | 70.00th=[ 34], 80.00th=[ 174], 90.00th=[ 251], 95.00th=[ 266], 00:34:58.165 | 99.00th=[ 284], 99.50th=[ 326], 99.90th=[ 388], 99.95th=[ 388], 00:34:58.165 | 99.99th=[ 388] 00:34:58.165 bw ( KiB/s): min= 176, max= 2048, per=4.36%, avg=830.40, stdev=796.45, samples=20 00:34:58.165 iops : min= 44, max= 512, avg=207.60, stdev=199.11, samples=20 00:34:58.165 lat (msec) : 20=0.10%, 50=78.78%, 100=0.67%, 250=10.80%, 500=9.66% 00:34:58.165 cpu : usr=97.55%, sys=1.71%, ctx=100, majf=0, minf=50 00:34:58.165 IO depths : 1=5.2%, 2=10.6%, 4=22.4%, 8=54.5%, 16=7.4%, 32=0.0%, >=64=0.0% 00:34:58.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.165 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.165 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.165 filename1: (groupid=0, jobs=1): err= 0: pid=997346: Thu Jul 25 04:18:11 2024 00:34:58.165 read: IOPS=202, BW=809KiB/s (829kB/s)(8240KiB/10181msec) 00:34:58.165 slat (usec): min=7, max=112, avg=36.29, stdev=25.97 00:34:58.165 clat (msec): min=17, max=356, avg=78.44, stdev=88.59 00:34:58.165 lat (msec): min=17, max=356, avg=78.48, stdev=88.58 00:34:58.165 clat percentiles (msec): 00:34:58.165 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.165 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:34:58.165 | 70.00th=[ 34], 80.00th=[ 171], 90.00th=[ 251], 95.00th=[ 262], 00:34:58.165 | 99.00th=[ 284], 99.50th=[ 326], 99.90th=[ 351], 99.95th=[ 359], 00:34:58.165 | 99.99th=[ 359] 00:34:58.165 bw ( KiB/s): min= 128, max= 1920, per=4.29%, avg=817.60, stdev=793.56, samples=20 00:34:58.165 iops : min= 32, max= 480, avg=204.40, 
stdev=198.39, samples=20 00:34:58.165 lat (msec) : 20=0.19%, 50=78.25%, 250=11.46%, 500=10.10% 00:34:58.165 cpu : usr=97.68%, sys=1.66%, ctx=115, majf=0, minf=49 00:34:58.165 IO depths : 1=5.0%, 2=10.2%, 4=22.0%, 8=55.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:34:58.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.165 complete : 0=0.0%, 4=93.2%, 8=1.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.165 issued rwts: total=2060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.165 filename1: (groupid=0, jobs=1): err= 0: pid=997347: Thu Jul 25 04:18:11 2024 00:34:58.165 read: IOPS=203, BW=813KiB/s (832kB/s)(8280KiB/10190msec) 00:34:58.165 slat (usec): min=6, max=120, avg=47.78, stdev=29.19 00:34:58.165 clat (msec): min=31, max=409, avg=78.34, stdev=90.31 00:34:58.165 lat (msec): min=31, max=410, avg=78.39, stdev=90.30 00:34:58.165 clat percentiles (msec): 00:34:58.165 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.165 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:34:58.165 | 70.00th=[ 34], 80.00th=[ 188], 90.00th=[ 251], 95.00th=[ 262], 00:34:58.165 | 99.00th=[ 338], 99.50th=[ 372], 99.90th=[ 409], 99.95th=[ 409], 00:34:58.165 | 99.99th=[ 409] 00:34:58.165 bw ( KiB/s): min= 208, max= 2048, per=4.31%, avg=821.60, stdev=791.40, samples=20 00:34:58.165 iops : min= 52, max= 512, avg=205.40, stdev=197.85, samples=20 00:34:58.165 lat (msec) : 50=78.84%, 100=0.77%, 250=9.86%, 500=10.53% 00:34:58.165 cpu : usr=98.00%, sys=1.32%, ctx=31, majf=0, minf=57 00:34:58.165 IO depths : 1=5.1%, 2=10.3%, 4=22.6%, 8=54.6%, 16=7.4%, 32=0.0%, >=64=0.0% 00:34:58.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.165 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.165 issued rwts: total=2070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.165 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:34:58.165 filename1: (groupid=0, jobs=1): err= 0: pid=997348: Thu Jul 25 04:18:11 2024 00:34:58.165 read: IOPS=199, BW=798KiB/s (817kB/s)(8120KiB/10173msec) 00:34:58.165 slat (usec): min=8, max=106, avg=35.32, stdev=22.85 00:34:58.165 clat (msec): min=24, max=394, avg=79.76, stdev=93.94 00:34:58.165 lat (msec): min=24, max=394, avg=79.80, stdev=93.93 00:34:58.165 clat percentiles (msec): 00:34:58.165 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.165 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:34:58.165 | 70.00th=[ 34], 80.00th=[ 134], 90.00th=[ 257], 95.00th=[ 268], 00:34:58.165 | 99.00th=[ 342], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 397], 00:34:58.165 | 99.99th=[ 397] 00:34:58.165 bw ( KiB/s): min= 144, max= 2048, per=4.23%, avg=805.60, stdev=792.90, samples=20 00:34:58.165 iops : min= 36, max= 512, avg=201.40, stdev=198.22, samples=20 00:34:58.165 lat (msec) : 50=78.82%, 100=0.69%, 250=8.77%, 500=11.72% 00:34:58.165 cpu : usr=96.50%, sys=2.18%, ctx=74, majf=0, minf=48 00:34:58.165 IO depths : 1=3.9%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:34:58.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.165 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.165 issued rwts: total=2030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.165 filename1: (groupid=0, jobs=1): err= 0: pid=997349: Thu Jul 25 04:18:11 2024 00:34:58.165 read: IOPS=202, BW=811KiB/s (831kB/s)(8256KiB/10176msec) 00:34:58.165 slat (usec): min=8, max=111, avg=34.73, stdev=22.00 00:34:58.165 clat (msec): min=31, max=365, avg=78.59, stdev=88.65 00:34:58.165 lat (msec): min=31, max=365, avg=78.62, stdev=88.64 00:34:58.165 clat percentiles (msec): 00:34:58.165 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.165 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 
00:34:58.165 | 70.00th=[ 34], 80.00th=[ 178], 90.00th=[ 251], 95.00th=[ 264], 00:34:58.165 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 368], 00:34:58.165 | 99.99th=[ 368] 00:34:58.165 bw ( KiB/s): min= 144, max= 1920, per=4.30%, avg=819.20, stdev=792.65, samples=20 00:34:58.165 iops : min= 36, max= 480, avg=204.80, stdev=198.16, samples=20 00:34:58.165 lat (msec) : 50=78.29%, 250=11.82%, 500=9.88% 00:34:58.165 cpu : usr=98.08%, sys=1.42%, ctx=71, majf=0, minf=42 00:34:58.165 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:34:58.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.165 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.165 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.165 filename1: (groupid=0, jobs=1): err= 0: pid=997350: Thu Jul 25 04:18:11 2024 00:34:58.165 read: IOPS=192, BW=769KiB/s (787kB/s)(7808KiB/10160msec) 00:34:58.165 slat (usec): min=8, max=109, avg=41.57, stdev=22.49 00:34:58.165 clat (msec): min=27, max=505, avg=82.90, stdev=111.72 00:34:58.165 lat (msec): min=27, max=505, avg=82.94, stdev=111.71 00:34:58.165 clat percentiles (msec): 00:34:58.165 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.165 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:34:58.165 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 326], 95.00th=[ 368], 00:34:58.165 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 506], 99.95th=[ 506], 00:34:58.165 | 99.99th=[ 506] 00:34:58.165 bw ( KiB/s): min= 128, max= 2048, per=4.06%, avg=774.25, stdev=817.11, samples=20 00:34:58.165 iops : min= 32, max= 512, avg=193.55, stdev=204.27, samples=20 00:34:58.165 lat (msec) : 50=81.97%, 250=3.38%, 500=14.55%, 750=0.10% 00:34:58.165 cpu : usr=97.94%, sys=1.43%, ctx=73, majf=0, minf=48 00:34:58.165 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 
32=0.0%, >=64=0.0% 00:34:58.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.165 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.165 issued rwts: total=1952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.165 filename1: (groupid=0, jobs=1): err= 0: pid=997351: Thu Jul 25 04:18:11 2024 00:34:58.165 read: IOPS=194, BW=780KiB/s (799kB/s)(7936KiB/10175msec) 00:34:58.165 slat (usec): min=8, max=124, avg=45.87, stdev=24.79 00:34:58.165 clat (msec): min=25, max=432, avg=81.66, stdev=103.41 00:34:58.165 lat (msec): min=25, max=432, avg=81.71, stdev=103.41 00:34:58.165 clat percentiles (msec): 00:34:58.165 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.165 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:34:58.165 | 70.00th=[ 34], 80.00th=[ 43], 90.00th=[ 271], 95.00th=[ 342], 00:34:58.165 | 99.00th=[ 384], 99.50th=[ 388], 99.90th=[ 435], 99.95th=[ 435], 00:34:58.165 | 99.99th=[ 435] 00:34:58.165 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=787.20, stdev=806.52, samples=20 00:34:58.165 iops : min= 32, max= 480, avg=196.80, stdev=201.63, samples=20 00:34:58.165 lat (msec) : 50=80.65%, 250=4.84%, 500=14.52% 00:34:58.165 cpu : usr=97.44%, sys=1.68%, ctx=99, majf=0, minf=45 00:34:58.165 IO depths : 1=5.0%, 2=11.1%, 4=24.7%, 8=51.6%, 16=7.5%, 32=0.0%, >=64=0.0% 00:34:58.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.166 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.166 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.166 filename1: (groupid=0, jobs=1): err= 0: pid=997352: Thu Jul 25 04:18:11 2024 00:34:58.166 read: IOPS=205, BW=821KiB/s (841kB/s)(8368KiB/10192msec) 00:34:58.166 slat (usec): min=6, max=115, avg=38.63, stdev=27.06 
00:34:58.166 clat (msec): min=15, max=408, avg=77.29, stdev=87.94 00:34:58.166 lat (msec): min=15, max=408, avg=77.33, stdev=87.92 00:34:58.166 clat percentiles (msec): 00:34:58.166 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.166 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:34:58.166 | 70.00th=[ 34], 80.00th=[ 174], 90.00th=[ 249], 95.00th=[ 262], 00:34:58.166 | 99.00th=[ 292], 99.50th=[ 326], 99.90th=[ 409], 99.95th=[ 409], 00:34:58.166 | 99.99th=[ 409] 00:34:58.166 bw ( KiB/s): min= 176, max= 2048, per=4.36%, avg=830.40, stdev=796.45, samples=20 00:34:58.166 iops : min= 44, max= 512, avg=207.60, stdev=199.11, samples=20 00:34:58.166 lat (msec) : 20=0.10%, 50=78.68%, 100=0.76%, 250=10.99%, 500=9.46% 00:34:58.166 cpu : usr=97.45%, sys=1.79%, ctx=90, majf=0, minf=56 00:34:58.166 IO depths : 1=5.0%, 2=10.1%, 4=21.6%, 8=55.8%, 16=7.5%, 32=0.0%, >=64=0.0% 00:34:58.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.166 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.166 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.166 filename2: (groupid=0, jobs=1): err= 0: pid=997353: Thu Jul 25 04:18:11 2024 00:34:58.166 read: IOPS=201, BW=804KiB/s (824kB/s)(8184KiB/10176msec) 00:34:58.166 slat (nsec): min=8443, max=67411, avg=22308.69, stdev=11588.80 00:34:58.166 clat (msec): min=27, max=422, avg=79.32, stdev=92.01 00:34:58.166 lat (msec): min=27, max=422, avg=79.34, stdev=92.01 00:34:58.166 clat percentiles (msec): 00:34:58.166 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.166 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:34:58.166 | 70.00th=[ 34], 80.00th=[ 159], 90.00th=[ 257], 95.00th=[ 266], 00:34:58.166 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 359], 99.95th=[ 422], 00:34:58.166 | 99.99th=[ 422] 00:34:58.166 bw ( KiB/s): min= 
144, max= 1920, per=4.26%, avg=812.00, stdev=797.43, samples=20 00:34:58.166 iops : min= 36, max= 480, avg=203.00, stdev=199.36, samples=20 00:34:58.166 lat (msec) : 50=78.98%, 250=8.70%, 500=12.32% 00:34:58.166 cpu : usr=97.92%, sys=1.66%, ctx=25, majf=0, minf=57 00:34:58.166 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:34:58.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.166 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.166 issued rwts: total=2046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.166 filename2: (groupid=0, jobs=1): err= 0: pid=997354: Thu Jul 25 04:18:11 2024 00:34:58.166 read: IOPS=190, BW=762KiB/s (780kB/s)(7744KiB/10167msec) 00:34:58.166 slat (nsec): min=8229, max=85084, avg=32744.44, stdev=12340.20 00:34:58.166 clat (msec): min=32, max=492, avg=83.73, stdev=115.50 00:34:58.166 lat (msec): min=32, max=492, avg=83.76, stdev=115.49 00:34:58.166 clat percentiles (msec): 00:34:58.166 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.166 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:34:58.166 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 347], 95.00th=[ 380], 00:34:58.166 | 99.00th=[ 422], 99.50th=[ 422], 99.90th=[ 493], 99.95th=[ 493], 00:34:58.166 | 99.99th=[ 493] 00:34:58.166 bw ( KiB/s): min= 128, max= 2048, per=4.03%, avg=768.10, stdev=822.19, samples=20 00:34:58.166 iops : min= 32, max= 512, avg=192.00, stdev=205.53, samples=20 00:34:58.166 lat (msec) : 50=82.64%, 250=2.58%, 500=14.77% 00:34:58.166 cpu : usr=98.02%, sys=1.46%, ctx=29, majf=0, minf=45 00:34:58.166 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:58.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.166 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.166 issued rwts: 
total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.166 filename2: (groupid=0, jobs=1): err= 0: pid=997355: Thu Jul 25 04:18:11 2024 00:34:58.166 read: IOPS=205, BW=821KiB/s (840kB/s)(8368KiB/10198msec) 00:34:58.166 slat (usec): min=8, max=113, avg=41.75, stdev=28.02 00:34:58.166 clat (msec): min=31, max=370, avg=77.30, stdev=88.04 00:34:58.166 lat (msec): min=31, max=370, avg=77.34, stdev=88.02 00:34:58.166 clat percentiles (msec): 00:34:58.166 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.166 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:34:58.166 | 70.00th=[ 34], 80.00th=[ 169], 90.00th=[ 251], 95.00th=[ 264], 00:34:58.166 | 99.00th=[ 284], 99.50th=[ 338], 99.90th=[ 351], 99.95th=[ 372], 00:34:58.166 | 99.99th=[ 372] 00:34:58.166 bw ( KiB/s): min= 256, max= 2048, per=4.36%, avg=830.40, stdev=795.86, samples=20 00:34:58.166 iops : min= 64, max= 512, avg=207.60, stdev=198.97, samples=20 00:34:58.166 lat (msec) : 50=78.78%, 100=0.76%, 250=10.61%, 500=9.85% 00:34:58.166 cpu : usr=97.05%, sys=1.77%, ctx=27, majf=0, minf=33 00:34:58.166 IO depths : 1=5.0%, 2=10.3%, 4=22.0%, 8=55.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:34:58.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.166 complete : 0=0.0%, 4=93.2%, 8=1.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.166 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.166 filename2: (groupid=0, jobs=1): err= 0: pid=997356: Thu Jul 25 04:18:11 2024 00:34:58.166 read: IOPS=189, BW=758KiB/s (776kB/s)(7704KiB/10165msec) 00:34:58.166 slat (usec): min=8, max=699, avg=45.55, stdev=45.65 00:34:58.166 clat (msec): min=14, max=535, avg=84.19, stdev=119.52 00:34:58.166 lat (msec): min=14, max=535, avg=84.24, stdev=119.51 00:34:58.166 clat percentiles (msec): 00:34:58.166 | 1.00th=[ 23], 5.00th=[ 25], 
10.00th=[ 29], 20.00th=[ 33], 00:34:58.166 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:34:58.166 | 70.00th=[ 35], 80.00th=[ 43], 90.00th=[ 347], 95.00th=[ 380], 00:34:58.166 | 99.00th=[ 422], 99.50th=[ 489], 99.90th=[ 535], 99.95th=[ 535], 00:34:58.166 | 99.99th=[ 535] 00:34:58.166 bw ( KiB/s): min= 128, max= 2064, per=4.01%, avg=764.10, stdev=832.92, samples=20 00:34:58.166 iops : min= 32, max= 516, avg=191.00, stdev=208.21, samples=20 00:34:58.166 lat (msec) : 20=0.83%, 50=82.66%, 250=2.18%, 500=13.91%, 750=0.42% 00:34:58.166 cpu : usr=96.22%, sys=2.28%, ctx=137, majf=0, minf=65 00:34:58.166 IO depths : 1=0.9%, 2=2.4%, 4=7.3%, 8=74.5%, 16=15.0%, 32=0.0%, >=64=0.0% 00:34:58.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.166 complete : 0=0.0%, 4=90.2%, 8=7.4%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.166 issued rwts: total=1926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.166 filename2: (groupid=0, jobs=1): err= 0: pid=997357: Thu Jul 25 04:18:11 2024 00:34:58.166 read: IOPS=208, BW=834KiB/s (854kB/s)(8504KiB/10196msec) 00:34:58.166 slat (nsec): min=3685, max=53008, avg=12909.42, stdev=5156.12 00:34:58.166 clat (msec): min=5, max=342, avg=76.50, stdev=85.01 00:34:58.166 lat (msec): min=5, max=342, avg=76.51, stdev=85.01 00:34:58.166 clat percentiles (msec): 00:34:58.166 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.166 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:58.166 | 70.00th=[ 34], 80.00th=[ 178], 90.00th=[ 247], 95.00th=[ 262], 00:34:58.166 | 99.00th=[ 268], 99.50th=[ 268], 99.90th=[ 268], 99.95th=[ 342], 00:34:58.166 | 99.99th=[ 342] 00:34:58.166 bw ( KiB/s): min= 240, max= 2048, per=4.43%, avg=844.00, stdev=798.68, samples=20 00:34:58.166 iops : min= 60, max= 512, avg=211.00, stdev=199.67, samples=20 00:34:58.166 lat (msec) : 10=0.75%, 50=77.61%, 100=0.66%, 250=12.14%, 500=8.84% 
00:34:58.166 cpu : usr=98.21%, sys=1.40%, ctx=15, majf=0, minf=87 00:34:58.166 IO depths : 1=5.0%, 2=11.2%, 4=24.8%, 8=51.5%, 16=7.4%, 32=0.0%, >=64=0.0% 00:34:58.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.166 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.166 issued rwts: total=2126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.166 filename2: (groupid=0, jobs=1): err= 0: pid=997358: Thu Jul 25 04:18:11 2024 00:34:58.166 read: IOPS=200, BW=801KiB/s (820kB/s)(8112KiB/10127msec) 00:34:58.166 slat (usec): min=8, max=187, avg=33.10, stdev=18.12 00:34:58.166 clat (msec): min=32, max=381, avg=79.63, stdev=95.45 00:34:58.167 lat (msec): min=32, max=381, avg=79.66, stdev=95.45 00:34:58.167 clat percentiles (msec): 00:34:58.167 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.167 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:34:58.167 | 70.00th=[ 34], 80.00th=[ 113], 90.00th=[ 253], 95.00th=[ 268], 00:34:58.167 | 99.00th=[ 368], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:34:58.167 | 99.99th=[ 380] 00:34:58.167 bw ( KiB/s): min= 128, max= 1920, per=4.22%, avg=804.80, stdev=803.03, samples=20 00:34:58.167 iops : min= 32, max= 480, avg=201.20, stdev=200.76, samples=20 00:34:58.167 lat (msec) : 50=79.68%, 250=8.38%, 500=11.93% 00:34:58.167 cpu : usr=97.01%, sys=1.66%, ctx=64, majf=0, minf=60 00:34:58.167 IO depths : 1=5.3%, 2=10.9%, 4=23.2%, 8=53.3%, 16=7.2%, 32=0.0%, >=64=0.0% 00:34:58.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.167 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.167 issued rwts: total=2028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.167 filename2: (groupid=0, jobs=1): err= 0: pid=997359: Thu Jul 25 04:18:11 2024 
00:34:58.167 read: IOPS=199, BW=799KiB/s (818kB/s)(8128KiB/10170msec) 00:34:58.167 slat (usec): min=8, max=122, avg=41.81, stdev=25.27 00:34:58.167 clat (msec): min=27, max=354, avg=79.69, stdev=92.06 00:34:58.167 lat (msec): min=27, max=354, avg=79.73, stdev=92.04 00:34:58.167 clat percentiles (msec): 00:34:58.167 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.167 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:34:58.167 | 70.00th=[ 34], 80.00th=[ 161], 90.00th=[ 257], 95.00th=[ 266], 00:34:58.167 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 355], 00:34:58.167 | 99.99th=[ 355] 00:34:58.167 bw ( KiB/s): min= 144, max= 2048, per=4.23%, avg=806.40, stdev=792.31, samples=20 00:34:58.167 iops : min= 36, max= 512, avg=201.60, stdev=198.08, samples=20 00:34:58.167 lat (msec) : 50=78.74%, 250=8.17%, 500=13.09% 00:34:58.167 cpu : usr=96.45%, sys=2.19%, ctx=110, majf=0, minf=55 00:34:58.167 IO depths : 1=5.2%, 2=11.4%, 4=24.8%, 8=51.3%, 16=7.4%, 32=0.0%, >=64=0.0% 00:34:58.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.167 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.167 issued rwts: total=2032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.167 filename2: (groupid=0, jobs=1): err= 0: pid=997360: Thu Jul 25 04:18:11 2024 00:34:58.167 read: IOPS=189, BW=760KiB/s (778kB/s)(7680KiB/10110msec) 00:34:58.167 slat (usec): min=6, max=121, avg=53.95, stdev=26.56 00:34:58.167 clat (msec): min=31, max=514, avg=83.77, stdev=118.93 00:34:58.167 lat (msec): min=31, max=514, avg=83.83, stdev=118.92 00:34:58.167 clat percentiles (msec): 00:34:58.167 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:34:58.167 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:34:58.167 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 359], 95.00th=[ 380], 00:34:58.167 | 99.00th=[ 418], 
99.50th=[ 418], 99.90th=[ 514], 99.95th=[ 514], 00:34:58.167 | 99.99th=[ 514] 00:34:58.167 bw ( KiB/s): min= 128, max= 2048, per=4.00%, avg=761.70, stdev=826.99, samples=20 00:34:58.167 iops : min= 32, max= 512, avg=190.40, stdev=206.73, samples=20 00:34:58.167 lat (msec) : 50=83.33%, 250=2.60%, 500=13.96%, 750=0.10% 00:34:58.167 cpu : usr=96.52%, sys=2.19%, ctx=86, majf=0, minf=40 00:34:58.167 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:58.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.167 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.167 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.167 00:34:58.167 Run status group 0 (all jobs): 00:34:58.167 READ: bw=18.6MiB/s (19.5MB/s), 756KiB/s-834KiB/s (774kB/s-854kB/s), io=190MiB (199MB), run=10110-10198msec 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:58.167 
04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.167 bdev_null0 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.167 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.168 [2024-07-25 04:18:12.278515] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:58.168 04:18:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.168 bdev_null1 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 
00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:58.168 { 00:34:58.168 "params": { 00:34:58.168 "name": "Nvme$subsystem", 00:34:58.168 "trtype": "$TEST_TRANSPORT", 00:34:58.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.168 "adrfam": "ipv4", 00:34:58.168 "trsvcid": "$NVMF_PORT", 00:34:58.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.168 "hdgst": ${hdgst:-false}, 00:34:58.168 "ddgst": ${ddgst:-false} 00:34:58.168 }, 00:34:58.168 "method": "bdev_nvme_attach_controller" 00:34:58.168 } 00:34:58.168 EOF 00:34:58.168 )") 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@54 -- # local file 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:58.168 { 00:34:58.168 "params": { 00:34:58.168 "name": "Nvme$subsystem", 00:34:58.168 "trtype": "$TEST_TRANSPORT", 00:34:58.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.168 "adrfam": "ipv4", 00:34:58.168 "trsvcid": "$NVMF_PORT", 00:34:58.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.168 "hdgst": ${hdgst:-false}, 00:34:58.168 "ddgst": ${ddgst:-false} 00:34:58.168 }, 00:34:58.168 "method": "bdev_nvme_attach_controller" 00:34:58.168 } 00:34:58.168 EOF 00:34:58.168 )") 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 
-- # cat 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:58.168 "params": { 00:34:58.168 "name": "Nvme0", 00:34:58.168 "trtype": "tcp", 00:34:58.168 "traddr": "10.0.0.2", 00:34:58.168 "adrfam": "ipv4", 00:34:58.168 "trsvcid": "4420", 00:34:58.168 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.168 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.168 "hdgst": false, 00:34:58.168 "ddgst": false 00:34:58.168 }, 00:34:58.168 "method": "bdev_nvme_attach_controller" 00:34:58.168 },{ 00:34:58.168 "params": { 00:34:58.168 "name": "Nvme1", 00:34:58.168 "trtype": "tcp", 00:34:58.168 "traddr": "10.0.0.2", 00:34:58.168 "adrfam": "ipv4", 00:34:58.168 "trsvcid": "4420", 00:34:58.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:58.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:58.168 "hdgst": false, 00:34:58.168 "ddgst": false 00:34:58.168 }, 00:34:58.168 "method": "bdev_nvme_attach_controller" 00:34:58.168 }' 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:58.168 04:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.168 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:58.168 ... 00:34:58.168 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:58.168 ... 00:34:58.169 fio-3.35 00:34:58.169 Starting 4 threads 00:34:58.169 EAL: No free 2048 kB hugepages reported on node 1 00:35:03.448 00:35:03.448 filename0: (groupid=0, jobs=1): err= 0: pid=999254: Thu Jul 25 04:18:18 2024 00:35:03.448 read: IOPS=1905, BW=14.9MiB/s (15.6MB/s)(74.5MiB/5002msec) 00:35:03.448 slat (nsec): min=4107, max=46535, avg=12198.23, stdev=5367.15 00:35:03.448 clat (usec): min=1004, max=10131, avg=4160.92, stdev=621.84 00:35:03.448 lat (usec): min=1018, max=10152, avg=4173.12, stdev=621.92 00:35:03.448 clat percentiles (usec): 00:35:03.448 | 1.00th=[ 2769], 5.00th=[ 3294], 10.00th=[ 3556], 20.00th=[ 3818], 00:35:03.448 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4146], 00:35:03.448 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5276], 00:35:03.448 | 99.00th=[ 6259], 99.50th=[ 6652], 99.90th=[ 7308], 99.95th=[ 9896], 00:35:03.448 | 99.99th=[10159] 00:35:03.448 bw ( KiB/s): min=14704, max=15936, per=24.71%, avg=15241.60, stdev=333.34, samples=10 00:35:03.448 iops : min= 1838, max= 1992, avg=1905.20, stdev=41.67, samples=10 00:35:03.448 
lat (msec) : 2=0.14%, 4=31.87%, 10=67.98%, 20=0.01% 00:35:03.448 cpu : usr=94.10%, sys=5.20%, ctx=10, majf=0, minf=0 00:35:03.448 IO depths : 1=0.1%, 2=3.5%, 4=67.5%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:03.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.448 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.448 issued rwts: total=9531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.448 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:03.448 filename0: (groupid=0, jobs=1): err= 0: pid=999255: Thu Jul 25 04:18:18 2024 00:35:03.448 read: IOPS=1908, BW=14.9MiB/s (15.6MB/s)(74.5MiB/5001msec) 00:35:03.448 slat (nsec): min=4141, max=51457, avg=11414.56, stdev=4466.38 00:35:03.448 clat (usec): min=1105, max=8007, avg=4156.97, stdev=646.98 00:35:03.448 lat (usec): min=1118, max=8019, avg=4168.39, stdev=646.86 00:35:03.448 clat percentiles (usec): 00:35:03.448 | 1.00th=[ 2737], 5.00th=[ 3294], 10.00th=[ 3556], 20.00th=[ 3785], 00:35:03.448 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4113], 00:35:03.448 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 4948], 95.00th=[ 5473], 00:35:03.448 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7570], 99.95th=[ 7963], 00:35:03.448 | 99.99th=[ 8029] 00:35:03.448 bw ( KiB/s): min=14688, max=15680, per=24.84%, avg=15320.89, stdev=269.89, samples=9 00:35:03.448 iops : min= 1836, max= 1960, avg=1915.11, stdev=33.74, samples=9 00:35:03.448 lat (msec) : 2=0.12%, 4=33.11%, 10=66.78% 00:35:03.448 cpu : usr=94.44%, sys=4.96%, ctx=65, majf=0, minf=9 00:35:03.448 IO depths : 1=0.1%, 2=6.2%, 4=65.4%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:03.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.448 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.448 issued rwts: total=9542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.448 latency : target=0, window=0, percentile=100.00%, depth=8 
00:35:03.448 filename1: (groupid=0, jobs=1): err= 0: pid=999256: Thu Jul 25 04:18:18 2024 00:35:03.448 read: IOPS=1998, BW=15.6MiB/s (16.4MB/s)(78.1MiB/5002msec) 00:35:03.448 slat (usec): min=4, max=233, avg=12.94, stdev= 5.25 00:35:03.448 clat (usec): min=1334, max=9359, avg=3963.02, stdev=569.02 00:35:03.448 lat (usec): min=1347, max=9372, avg=3975.96, stdev=569.04 00:35:03.448 clat percentiles (usec): 00:35:03.448 | 1.00th=[ 2638], 5.00th=[ 3032], 10.00th=[ 3261], 20.00th=[ 3589], 00:35:03.448 | 30.00th=[ 3785], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4080], 00:35:03.448 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4490], 95.00th=[ 4883], 00:35:03.448 | 99.00th=[ 5735], 99.50th=[ 6259], 99.90th=[ 7242], 99.95th=[ 9372], 00:35:03.448 | 99.99th=[ 9372] 00:35:03.448 bw ( KiB/s): min=15472, max=16736, per=25.91%, avg=15985.50, stdev=400.95, samples=10 00:35:03.448 iops : min= 1934, max= 2092, avg=1998.10, stdev=50.21, samples=10 00:35:03.448 lat (msec) : 2=0.18%, 4=43.58%, 10=56.24% 00:35:03.448 cpu : usr=92.16%, sys=6.20%, ctx=19, majf=0, minf=0 00:35:03.448 IO depths : 1=0.1%, 2=5.1%, 4=66.6%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:03.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.448 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.448 issued rwts: total=9994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.448 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:03.448 filename1: (groupid=0, jobs=1): err= 0: pid=999257: Thu Jul 25 04:18:18 2024 00:35:03.448 read: IOPS=1900, BW=14.8MiB/s (15.6MB/s)(74.2MiB/5001msec) 00:35:03.448 slat (nsec): min=4320, max=44776, avg=11278.47, stdev=4298.61 00:35:03.448 clat (usec): min=971, max=7703, avg=4173.96, stdev=611.68 00:35:03.448 lat (usec): min=985, max=7723, avg=4185.24, stdev=611.73 00:35:03.448 clat percentiles (usec): 00:35:03.448 | 1.00th=[ 2802], 5.00th=[ 3326], 10.00th=[ 3621], 20.00th=[ 3851], 00:35:03.448 | 30.00th=[ 3982], 
40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4146], 00:35:03.448 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5407], 00:35:03.448 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7242], 99.95th=[ 7504], 00:35:03.448 | 99.99th=[ 7701] 00:35:03.448 bw ( KiB/s): min=14624, max=15936, per=24.65%, avg=15205.11, stdev=408.67, samples=9 00:35:03.448 iops : min= 1828, max= 1992, avg=1900.56, stdev=51.07, samples=9 00:35:03.448 lat (usec) : 1000=0.01% 00:35:03.448 lat (msec) : 2=0.14%, 4=30.66%, 10=69.20% 00:35:03.448 cpu : usr=93.68%, sys=5.58%, ctx=6, majf=0, minf=9 00:35:03.448 IO depths : 1=0.1%, 2=6.8%, 4=63.8%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:03.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.448 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.448 issued rwts: total=9502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.448 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:03.448 00:35:03.448 Run status group 0 (all jobs): 00:35:03.448 READ: bw=60.2MiB/s (63.2MB/s), 14.8MiB/s-15.6MiB/s (15.6MB/s-16.4MB/s), io=301MiB (316MB), run=5001-5002msec 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:03.448 04:18:18 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.448 00:35:03.448 real 0m24.141s 00:35:03.448 user 4m34.532s 00:35:03.448 sys 0m7.275s 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:03.448 04:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:03.448 ************************************ 00:35:03.448 END TEST 
fio_dif_rand_params 00:35:03.448 ************************************ 00:35:03.448 04:18:18 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:03.448 04:18:18 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:03.448 04:18:18 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:03.448 04:18:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:03.448 ************************************ 00:35:03.448 START TEST fio_dif_digest 00:35:03.448 ************************************ 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:03.449 04:18:18 
nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:03.449 bdev_null0 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:03.449 [2024-07-25 04:18:18.496967] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:03.449 
04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:03.449 { 00:35:03.449 "params": { 00:35:03.449 "name": "Nvme$subsystem", 00:35:03.449 "trtype": "$TEST_TRANSPORT", 00:35:03.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:03.449 "adrfam": "ipv4", 00:35:03.449 "trsvcid": "$NVMF_PORT", 00:35:03.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:03.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:03.449 "hdgst": ${hdgst:-false}, 00:35:03.449 "ddgst": ${ddgst:-false} 00:35:03.449 }, 00:35:03.449 "method": "bdev_nvme_attach_controller" 00:35:03.449 } 00:35:03.449 EOF 00:35:03.449 )") 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1339 -- # local sanitizers 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:03.449 "params": { 00:35:03.449 "name": "Nvme0", 00:35:03.449 "trtype": "tcp", 00:35:03.449 "traddr": "10.0.0.2", 00:35:03.449 "adrfam": "ipv4", 00:35:03.449 "trsvcid": "4420", 00:35:03.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:03.449 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:03.449 "hdgst": true, 00:35:03.449 "ddgst": true 00:35:03.449 }, 00:35:03.449 "method": "bdev_nvme_attach_controller" 00:35:03.449 }' 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:03.449 04:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.707 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:03.707 ... 
00:35:03.707 fio-3.35 00:35:03.707 Starting 3 threads 00:35:03.707 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.901 00:35:15.901 filename0: (groupid=0, jobs=1): err= 0: pid=1000009: Thu Jul 25 04:18:29 2024 00:35:15.901 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(253MiB/10046msec) 00:35:15.901 slat (nsec): min=7721, max=64157, avg=21441.31, stdev=6521.45 00:35:15.901 clat (usec): min=8267, max=58437, avg=14851.20, stdev=3351.39 00:35:15.901 lat (usec): min=8283, max=58465, avg=14872.64, stdev=3351.51 00:35:15.901 clat percentiles (usec): 00:35:15.901 | 1.00th=[ 9503], 5.00th=[11863], 10.00th=[12911], 20.00th=[13698], 00:35:15.901 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:35:15.901 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16450], 95.00th=[16909], 00:35:15.901 | 99.00th=[18482], 99.50th=[46924], 99.90th=[56886], 99.95th=[57934], 00:35:15.901 | 99.99th=[58459] 00:35:15.901 bw ( KiB/s): min=22272, max=28928, per=33.27%, avg=25856.00, stdev=1656.99, samples=20 00:35:15.901 iops : min= 174, max= 226, avg=202.00, stdev=12.95, samples=20 00:35:15.901 lat (msec) : 10=2.27%, 20=97.13%, 50=0.15%, 100=0.44% 00:35:15.901 cpu : usr=93.17%, sys=5.87%, ctx=134, majf=0, minf=110 00:35:15.901 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.901 issued rwts: total=2023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.901 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:15.901 filename0: (groupid=0, jobs=1): err= 0: pid=1000010: Thu Jul 25 04:18:29 2024 00:35:15.901 read: IOPS=204, BW=25.6MiB/s (26.9MB/s)(257MiB/10043msec) 00:35:15.901 slat (usec): min=7, max=103, avg=16.63, stdev= 5.52 00:35:15.901 clat (usec): min=8215, max=59286, avg=14599.52, stdev=3069.19 00:35:15.901 lat (usec): min=8229, max=59299, avg=14616.14, 
stdev=3069.11 00:35:15.901 clat percentiles (usec): 00:35:15.901 | 1.00th=[ 9503], 5.00th=[11338], 10.00th=[12780], 20.00th=[13566], 00:35:15.901 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:35:15.901 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:35:15.901 | 99.00th=[18220], 99.50th=[19268], 99.90th=[58459], 99.95th=[58459], 00:35:15.901 | 99.99th=[59507] 00:35:15.901 bw ( KiB/s): min=23040, max=29184, per=33.87%, avg=26319.20, stdev=1646.48, samples=20 00:35:15.901 iops : min= 180, max= 228, avg=205.60, stdev=12.89, samples=20 00:35:15.901 lat (msec) : 10=2.48%, 20=97.08%, 50=0.05%, 100=0.39% 00:35:15.901 cpu : usr=93.15%, sys=6.38%, ctx=21, majf=0, minf=175 00:35:15.901 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.901 issued rwts: total=2058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.901 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:15.901 filename0: (groupid=0, jobs=1): err= 0: pid=1000011: Thu Jul 25 04:18:29 2024 00:35:15.901 read: IOPS=200, BW=25.1MiB/s (26.3MB/s)(252MiB/10046msec) 00:35:15.901 slat (nsec): min=6236, max=44457, avg=16036.01, stdev=4785.54 00:35:15.901 clat (usec): min=8514, max=57676, avg=14892.88, stdev=4502.02 00:35:15.901 lat (usec): min=8528, max=57689, avg=14908.92, stdev=4502.01 00:35:15.901 clat percentiles (usec): 00:35:15.901 | 1.00th=[10028], 5.00th=[12125], 10.00th=[12780], 20.00th=[13435], 00:35:15.901 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:35:15.901 | 70.00th=[15139], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:35:15.901 | 99.00th=[52167], 99.50th=[55313], 99.90th=[56886], 99.95th=[56886], 00:35:15.901 | 99.99th=[57934] 00:35:15.901 bw ( KiB/s): min=22016, max=28928, per=33.21%, avg=25804.80, stdev=1721.51, 
samples=20 00:35:15.901 iops : min= 172, max= 226, avg=201.60, stdev=13.45, samples=20 00:35:15.901 lat (msec) : 10=0.99%, 20=97.72%, 50=0.20%, 100=1.09% 00:35:15.901 cpu : usr=92.92%, sys=6.59%, ctx=24, majf=0, minf=71 00:35:15.901 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.901 issued rwts: total=2018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.901 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:15.901 00:35:15.901 Run status group 0 (all jobs): 00:35:15.901 READ: bw=75.9MiB/s (79.6MB/s), 25.1MiB/s-25.6MiB/s (26.3MB/s-26.9MB/s), io=762MiB (799MB), run=10043-10046msec 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.901 00:35:15.901 real 0m11.180s 00:35:15.901 user 0m29.167s 00:35:15.901 sys 0m2.203s 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:15.901 04:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:15.901 ************************************ 00:35:15.901 END TEST fio_dif_digest 00:35:15.901 ************************************ 00:35:15.901 04:18:29 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:15.901 04:18:29 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:15.901 04:18:29 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:15.901 04:18:29 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:15.901 04:18:29 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:15.901 04:18:29 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:15.901 04:18:29 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:15.901 04:18:29 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:15.901 rmmod nvme_tcp 00:35:15.901 rmmod nvme_fabrics 00:35:15.901 rmmod nvme_keyring 00:35:15.901 04:18:29 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:15.901 04:18:29 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:15.901 04:18:29 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:15.901 04:18:29 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 993355 ']' 00:35:15.901 04:18:29 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 993355 00:35:15.901 04:18:29 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 993355 ']' 00:35:15.901 04:18:29 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 993355 00:35:15.902 04:18:29 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:35:15.902 04:18:29 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:15.902 04:18:29 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 993355 00:35:15.902 04:18:29 nvmf_dif -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:15.902 04:18:29 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:15.902 04:18:29 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 993355' 00:35:15.902 killing process with pid 993355 00:35:15.902 04:18:29 nvmf_dif -- common/autotest_common.sh@969 -- # kill 993355 00:35:15.902 04:18:29 nvmf_dif -- common/autotest_common.sh@974 -- # wait 993355 00:35:15.902 04:18:29 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:15.902 04:18:29 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:15.902 Waiting for block devices as requested 00:35:15.902 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:15.902 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:16.160 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:16.160 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:16.160 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:16.160 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:16.417 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:16.417 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:16.417 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:16.417 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:16.675 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:16.675 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:16.675 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:16.675 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:16.932 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:16.932 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:16.932 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:17.190 04:18:32 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:17.190 04:18:32 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:17.190 04:18:32 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:17.190 
04:18:32 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:17.190 04:18:32 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.190 04:18:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:17.190 04:18:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.088 04:18:34 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:19.088 00:35:19.088 real 1m6.076s 00:35:19.088 user 6m30.508s 00:35:19.088 sys 0m18.704s 00:35:19.088 04:18:34 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:19.088 04:18:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:19.088 ************************************ 00:35:19.088 END TEST nvmf_dif 00:35:19.088 ************************************ 00:35:19.088 04:18:34 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:19.088 04:18:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:19.088 04:18:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:19.088 04:18:34 -- common/autotest_common.sh@10 -- # set +x 00:35:19.088 ************************************ 00:35:19.088 START TEST nvmf_abort_qd_sizes 00:35:19.088 ************************************ 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:19.088 * Looking for test storage... 
00:35:19.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:19.088 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:19.089 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:19.089 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:19.089 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.089 04:18:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:19.089 04:18:34 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.089 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:19.089 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:19.089 04:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:19.089 04:18:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:21.616 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:21.616 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:35:21.616 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:21.616 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:21.616 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:21.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:21.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:35:21.617 00:35:21.617 --- 10.0.0.2 ping statistics --- 00:35:21.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:21.617 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:21.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:21.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:35:21.617 00:35:21.617 --- 10.0.0.1 ping statistics --- 00:35:21.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:21.617 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:21.617 04:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:22.551 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:22.551 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:22.551 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:22.551 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:22.551 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:22.551 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:22.551 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:22.551 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:22.551 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:22.551 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:22.551 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:22.551 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:22.551 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:22.551 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:22.551 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:35:22.551 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:23.520 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1004792 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1004792 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1004792 ']' 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:23.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:23.520 04:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:23.778 [2024-07-25 04:18:38.831430] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:35:23.778 [2024-07-25 04:18:38.831510] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:23.778 EAL: No free 2048 kB hugepages reported on node 1 00:35:23.778 [2024-07-25 04:18:38.868706] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:23.778 [2024-07-25 04:18:38.897203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:23.778 [2024-07-25 04:18:38.993272] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:23.778 [2024-07-25 04:18:38.993335] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:23.778 [2024-07-25 04:18:38.993360] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:23.778 [2024-07-25 04:18:38.993373] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:23.778 [2024-07-25 04:18:38.993384] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:23.778 [2024-07-25 04:18:38.993455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:23.778 [2024-07-25 04:18:38.993508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:23.778 [2024-07-25 04:18:38.993619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:23.778 [2024-07-25 04:18:38.993621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:24.036 04:18:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:24.036 ************************************ 00:35:24.036 START TEST spdk_target_abort 00:35:24.036 ************************************ 00:35:24.036 04:18:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:35:24.036 04:18:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:24.036 04:18:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:35:24.036 04:18:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.036 04:18:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.326 spdk_targetn1 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.326 [2024-07-25 04:18:42.021410] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.326 [2024-07-25 04:18:42.053686] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:27.326 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:27.327 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:27.327 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:27.327 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:27.327 04:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:27.327 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.616 Initializing NVMe Controllers 00:35:30.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:30.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:30.616 Initialization complete. Launching workers. 
00:35:30.616 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11871, failed: 0 00:35:30.616 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 10624 00:35:30.616 success 812, unsuccess 435, failed 0 00:35:30.616 04:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:30.616 04:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:30.616 EAL: No free 2048 kB hugepages reported on node 1 00:35:33.892 Initializing NVMe Controllers 00:35:33.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:33.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:33.892 Initialization complete. Launching workers. 
00:35:33.892 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8588, failed: 0 00:35:33.892 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1249, failed to submit 7339 00:35:33.892 success 322, unsuccess 927, failed 0 00:35:33.892 04:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:33.892 04:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:33.892 EAL: No free 2048 kB hugepages reported on node 1 00:35:37.170 Initializing NVMe Controllers 00:35:37.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:37.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:37.170 Initialization complete. Launching workers. 
00:35:37.170 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31940, failed: 0 00:35:37.170 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2684, failed to submit 29256 00:35:37.170 success 533, unsuccess 2151, failed 0 00:35:37.170 04:18:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:37.170 04:18:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.170 04:18:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:37.170 04:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.171 04:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:37.171 04:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.171 04:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:38.103 04:18:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.103 04:18:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1004792 00:35:38.103 04:18:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1004792 ']' 00:35:38.103 04:18:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1004792 00:35:38.103 04:18:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:35:38.103 04:18:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:38.103 04:18:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1004792 00:35:38.360 04:18:53 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:38.360 04:18:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:38.360 04:18:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1004792' 00:35:38.360 killing process with pid 1004792 00:35:38.360 04:18:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1004792 00:35:38.360 04:18:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1004792 00:35:38.360 00:35:38.360 real 0m14.449s 00:35:38.360 user 0m54.684s 00:35:38.360 sys 0m2.700s 00:35:38.360 04:18:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:38.360 04:18:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:38.360 ************************************ 00:35:38.360 END TEST spdk_target_abort 00:35:38.360 ************************************ 00:35:38.360 04:18:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:38.360 04:18:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:38.360 04:18:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:38.360 04:18:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:38.617 ************************************ 00:35:38.617 START TEST kernel_target_abort 00:35:38.617 ************************************ 00:35:38.617 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:35:38.617 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:38.617 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:38.617 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:35:38.617 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.617 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.617 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:38.618 04:18:53 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:38.618 04:18:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:39.551 Waiting for block devices as requested 00:35:39.551 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:39.809 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:39.809 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:39.809 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:40.067 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:40.067 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:40.067 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:40.067 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:40.325 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:40.325 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:40.325 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:40.325 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:40.584 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:40.584 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:40.584 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:40.584 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:40.842 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local 
device=nvme0n1 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:40.842 No valid GPT data, bailing 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:35:40.842 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:41.100 00:35:41.100 Discovery Log Number of Records 2, Generation counter 2 00:35:41.100 =====Discovery Log Entry 0====== 00:35:41.100 trtype: tcp 00:35:41.100 adrfam: ipv4 00:35:41.100 subtype: current discovery subsystem 00:35:41.100 treq: not specified, sq flow control disable supported 00:35:41.100 portid: 1 00:35:41.100 trsvcid: 4420 00:35:41.100 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:41.100 traddr: 10.0.0.1 00:35:41.100 eflags: none 00:35:41.100 sectype: none 00:35:41.100 =====Discovery Log Entry 1====== 00:35:41.100 trtype: tcp 00:35:41.100 adrfam: ipv4 00:35:41.100 subtype: nvme subsystem 00:35:41.100 treq: not specified, sq flow control disable supported 00:35:41.100 portid: 1 00:35:41.100 trsvcid: 4420 00:35:41.100 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:41.100 traddr: 10.0.0.1 00:35:41.100 eflags: none 00:35:41.100 sectype: none 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:41.100 04:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:41.100 EAL: No free 2048 kB hugepages reported on node 1 00:35:44.396 Initializing NVMe Controllers 00:35:44.396 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:44.396 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:44.396 Initialization complete. Launching workers. 
00:35:44.396 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33415, failed: 0 00:35:44.396 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33415, failed to submit 0 00:35:44.396 success 0, unsuccess 33415, failed 0 00:35:44.396 04:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:44.396 04:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:44.396 EAL: No free 2048 kB hugepages reported on node 1 00:35:47.675 Initializing NVMe Controllers 00:35:47.675 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:47.675 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:47.675 Initialization complete. Launching workers. 
00:35:47.675 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65308, failed: 0 00:35:47.675 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16474, failed to submit 48834 00:35:47.675 success 0, unsuccess 16474, failed 0 00:35:47.675 04:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:47.675 04:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:47.675 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.950 Initializing NVMe Controllers 00:35:50.950 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:50.950 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:50.950 Initialization complete. Launching workers. 
00:35:50.950 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63677, failed: 0 00:35:50.950 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15922, failed to submit 47755 00:35:50.950 success 0, unsuccess 15922, failed 0 00:35:50.950 04:19:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:50.950 04:19:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:50.950 04:19:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:35:50.950 04:19:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:50.950 04:19:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:50.950 04:19:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:50.950 04:19:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:50.950 04:19:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:50.950 04:19:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:50.950 04:19:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:51.515 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:51.515 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:51.515 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:51.515 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:51.515 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:51.515 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:51.515 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:51.515 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:51.515 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:51.515 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:51.515 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:51.515 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:51.515 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:51.515 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:51.515 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:51.515 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:52.449 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:52.707 00:35:52.707 real 0m14.136s 00:35:52.707 user 0m5.260s 00:35:52.707 sys 0m3.320s 00:35:52.707 04:19:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:52.707 04:19:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:52.707 ************************************ 00:35:52.707 END TEST kernel_target_abort 00:35:52.707 ************************************ 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:52.707 rmmod nvme_tcp 00:35:52.707 rmmod nvme_fabrics 00:35:52.707 rmmod nvme_keyring 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1004792 ']' 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1004792 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1004792 ']' 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1004792 00:35:52.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1004792) - No such process 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1004792 is not found' 00:35:52.707 Process with pid 1004792 is not found 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:52.707 04:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:53.642 Waiting for block devices as requested 00:35:53.642 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:53.899 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:53.899 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:54.157 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:54.157 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:54.157 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:54.157 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:54.414 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:54.414 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:54.414 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:54.414 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:54.671 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:54.671 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:54.671 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:54.671 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:54.929 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:54.929 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:54.929 04:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:54.929 04:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:54.929 04:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:54.929 04:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:54.929 04:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.929 04:19:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:54.929 04:19:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:57.457 04:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:57.457 00:35:57.457 real 0m37.911s 00:35:57.457 user 1m2.019s 00:35:57.457 sys 0m9.339s 00:35:57.457 04:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:57.457 04:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:57.457 ************************************ 00:35:57.457 END TEST nvmf_abort_qd_sizes 00:35:57.457 ************************************ 00:35:57.457 04:19:12 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:57.457 04:19:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:57.457 04:19:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:57.457 04:19:12 -- common/autotest_common.sh@10 -- # set +x 00:35:57.457 ************************************ 00:35:57.457 START TEST keyring_file 00:35:57.457 ************************************ 00:35:57.457 04:19:12 keyring_file -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:57.457 * Looking for test storage... 00:35:57.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:57.457 04:19:12 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:57.457 04:19:12 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:57.457 04:19:12 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:57.457 04:19:12 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:57.457 04:19:12 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.457 04:19:12 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.457 04:19:12 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.457 04:19:12 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:57.457 04:19:12 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@47 -- # : 0 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:57.457 04:19:12 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:57.457 04:19:12 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:57.457 04:19:12 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:57.457 04:19:12 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:57.457 04:19:12 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:57.457 04:19:12 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:57.457 04:19:12 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qmcuGQJOOD 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:57.457 04:19:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qmcuGQJOOD 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qmcuGQJOOD 00:35:57.457 04:19:12 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.qmcuGQJOOD 00:35:57.457 04:19:12 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tMZBgW8LkK 00:35:57.457 04:19:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:57.458 04:19:12 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:57.458 04:19:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:57.458 04:19:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:57.458 04:19:12 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:35:57.458 04:19:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:57.458 04:19:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:57.458 04:19:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tMZBgW8LkK 00:35:57.458 04:19:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tMZBgW8LkK 00:35:57.458 04:19:12 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.tMZBgW8LkK 00:35:57.458 04:19:12 keyring_file -- keyring/file.sh@30 -- # tgtpid=1010547 00:35:57.458 04:19:12 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:57.458 04:19:12 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1010547 00:35:57.458 04:19:12 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1010547 ']' 00:35:57.458 04:19:12 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:57.458 04:19:12 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:57.458 04:19:12 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:57.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:57.458 04:19:12 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:57.458 04:19:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:57.458 [2024-07-25 04:19:12.467042] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:35:57.458 [2024-07-25 04:19:12.467127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010547 ] 00:35:57.458 EAL: No free 2048 kB hugepages reported on node 1 00:35:57.458 [2024-07-25 04:19:12.502995] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:57.458 [2024-07-25 04:19:12.529881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.458 [2024-07-25 04:19:12.622661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:57.717 04:19:12 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:57.717 [2024-07-25 04:19:12.871069] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:57.717 null0 00:35:57.717 [2024-07-25 04:19:12.903134] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:57.717 [2024-07-25 04:19:12.903644] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:57.717 [2024-07-25 04:19:12.911128] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.717 04:19:12 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:57.717 04:19:12 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:57.717 [2024-07-25 04:19:12.923157] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:57.717 request: 00:35:57.717 { 00:35:57.717 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:57.717 "secure_channel": false, 00:35:57.717 "listen_address": { 00:35:57.717 "trtype": "tcp", 00:35:57.717 "traddr": "127.0.0.1", 00:35:57.717 "trsvcid": "4420" 00:35:57.717 }, 00:35:57.717 "method": "nvmf_subsystem_add_listener", 00:35:57.717 "req_id": 1 00:35:57.717 } 00:35:57.717 Got JSON-RPC error response 00:35:57.717 response: 00:35:57.717 { 00:35:57.717 "code": -32602, 00:35:57.717 "message": "Invalid parameters" 00:35:57.717 } 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:35:57.717 04:19:12 keyring_file -- keyring/file.sh@46 -- # bperfpid=1010557 00:35:57.717 04:19:12 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:57.717 04:19:12 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1010557 /var/tmp/bperf.sock 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1010557 ']' 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:57.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:57.717 04:19:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:57.717 [2024-07-25 04:19:12.971093] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:35:57.717 [2024-07-25 04:19:12.971156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010557 ] 00:35:57.717 EAL: No free 2048 kB hugepages reported on node 1 00:35:57.717 [2024-07-25 04:19:13.002326] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:35:57.975 [2024-07-25 04:19:13.032996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.975 [2024-07-25 04:19:13.124577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:57.975 04:19:13 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:57.975 04:19:13 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:57.975 04:19:13 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qmcuGQJOOD 00:35:57.975 04:19:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qmcuGQJOOD 00:35:58.234 04:19:13 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tMZBgW8LkK 00:35:58.234 04:19:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tMZBgW8LkK 00:35:58.491 04:19:13 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:35:58.491 04:19:13 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:35:58.491 04:19:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.491 04:19:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:58.491 04:19:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.754 04:19:13 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.qmcuGQJOOD == \/\t\m\p\/\t\m\p\.\q\m\c\u\G\Q\J\O\O\D ]] 00:35:58.754 04:19:13 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:35:58.754 04:19:13 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:58.754 04:19:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.754 04:19:13 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.754 04:19:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:59.037 04:19:14 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.tMZBgW8LkK == \/\t\m\p\/\t\m\p\.\t\M\Z\B\g\W\8\L\k\K ]] 00:35:59.037 04:19:14 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:35:59.037 04:19:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:59.037 04:19:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:59.037 04:19:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:59.037 04:19:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:59.037 04:19:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.295 04:19:14 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:35:59.295 04:19:14 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:35:59.295 04:19:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:59.295 04:19:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:59.295 04:19:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:59.295 04:19:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.295 04:19:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:59.552 04:19:14 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:59.552 04:19:14 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:59.552 04:19:14 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:59.809 [2024-07-25 04:19:14.957633] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:59.809 nvme0n1 00:35:59.809 04:19:15 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:35:59.809 04:19:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:59.809 04:19:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:59.809 04:19:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:59.809 04:19:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.809 04:19:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:00.067 04:19:15 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:00.067 04:19:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:00.067 04:19:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:00.067 04:19:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:00.067 04:19:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:00.067 04:19:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:00.067 04:19:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:00.325 04:19:15 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:00.325 04:19:15 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:00.582 Running I/O for 1 seconds... 
00:36:01.516 00:36:01.516 Latency(us) 00:36:01.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:01.516 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:01.516 nvme0n1 : 1.02 5102.64 19.93 0.00 0.00 24784.40 6165.24 32039.82 00:36:01.516 =================================================================================================================== 00:36:01.516 Total : 5102.64 19.93 0.00 0.00 24784.40 6165.24 32039.82 00:36:01.516 0 00:36:01.516 04:19:16 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:01.516 04:19:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:01.774 04:19:16 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:01.774 04:19:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:01.774 04:19:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:01.774 04:19:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:01.774 04:19:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:01.774 04:19:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:02.032 04:19:17 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:02.032 04:19:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:02.032 04:19:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:02.032 04:19:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:02.032 04:19:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:02.032 04:19:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:02.032 04:19:17 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:02.290 04:19:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:02.290 04:19:17 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:02.290 04:19:17 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:02.290 04:19:17 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:02.290 04:19:17 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:02.290 04:19:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:02.290 04:19:17 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:02.290 04:19:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:02.290 04:19:17 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:02.290 04:19:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:02.548 [2024-07-25 04:19:17.677751] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:02.548 [2024-07-25 04:19:17.678042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b97b0 (107): Transport endpoint is 
not connected 00:36:02.548 [2024-07-25 04:19:17.679032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b97b0 (9): Bad file descriptor 00:36:02.548 [2024-07-25 04:19:17.680030] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:02.548 [2024-07-25 04:19:17.680055] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:02.549 [2024-07-25 04:19:17.680070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:02.549 request: 00:36:02.549 { 00:36:02.549 "name": "nvme0", 00:36:02.549 "trtype": "tcp", 00:36:02.549 "traddr": "127.0.0.1", 00:36:02.549 "adrfam": "ipv4", 00:36:02.549 "trsvcid": "4420", 00:36:02.549 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:02.549 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:02.549 "prchk_reftag": false, 00:36:02.549 "prchk_guard": false, 00:36:02.549 "hdgst": false, 00:36:02.549 "ddgst": false, 00:36:02.549 "psk": "key1", 00:36:02.549 "method": "bdev_nvme_attach_controller", 00:36:02.549 "req_id": 1 00:36:02.549 } 00:36:02.549 Got JSON-RPC error response 00:36:02.549 response: 00:36:02.549 { 00:36:02.549 "code": -5, 00:36:02.549 "message": "Input/output error" 00:36:02.549 } 00:36:02.549 04:19:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:02.549 04:19:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:02.549 04:19:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:02.549 04:19:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:02.549 04:19:17 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:02.549 04:19:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:02.549 04:19:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:02.549 04:19:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:02.549 04:19:17 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:02.549 04:19:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:02.807 04:19:17 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:02.807 04:19:17 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:02.807 04:19:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:02.807 04:19:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:02.807 04:19:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:02.807 04:19:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:02.807 04:19:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:03.064 04:19:18 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:03.064 04:19:18 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:03.064 04:19:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:03.322 04:19:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:03.322 04:19:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:03.580 04:19:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:03.580 04:19:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:03.580 04:19:18 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:03.838 04:19:18 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:03.838 04:19:18 keyring_file -- keyring/file.sh@80 -- # 
chmod 0660 /tmp/tmp.qmcuGQJOOD 00:36:03.838 04:19:18 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.qmcuGQJOOD 00:36:03.838 04:19:18 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:03.838 04:19:18 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.qmcuGQJOOD 00:36:03.838 04:19:18 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:03.838 04:19:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:03.838 04:19:18 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:03.838 04:19:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:03.838 04:19:18 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qmcuGQJOOD 00:36:03.838 04:19:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qmcuGQJOOD 00:36:04.096 [2024-07-25 04:19:19.175911] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qmcuGQJOOD': 0100660 00:36:04.096 [2024-07-25 04:19:19.175951] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:04.096 request: 00:36:04.096 { 00:36:04.096 "name": "key0", 00:36:04.096 "path": "/tmp/tmp.qmcuGQJOOD", 00:36:04.096 "method": "keyring_file_add_key", 00:36:04.096 "req_id": 1 00:36:04.096 } 00:36:04.096 Got JSON-RPC error response 00:36:04.096 response: 00:36:04.096 { 00:36:04.096 "code": -1, 00:36:04.096 "message": "Operation not permitted" 00:36:04.096 } 00:36:04.096 04:19:19 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:04.096 04:19:19 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:04.096 04:19:19 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:36:04.096 04:19:19 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:04.096 04:19:19 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.qmcuGQJOOD 00:36:04.096 04:19:19 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qmcuGQJOOD 00:36:04.096 04:19:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qmcuGQJOOD 00:36:04.354 04:19:19 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.qmcuGQJOOD 00:36:04.354 04:19:19 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:04.354 04:19:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:04.354 04:19:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:04.354 04:19:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:04.354 04:19:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:04.354 04:19:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:04.612 04:19:19 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:04.612 04:19:19 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:04.612 04:19:19 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:04.612 04:19:19 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:04.612 04:19:19 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:04.612 04:19:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
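The failed `keyring_file_add_key` above shows the keyring rejecting a key file with mode 0660 ("Invalid permissions for key file"), and the add succeeding only after `chmod 0600`. A sketch of that permission check, assuming (as the error message suggests) that any group/other permission bits disqualify the file; `check_key_path` is a hypothetical stand-in for SPDK's internal `keyring_file_check_path`:

```python
import os
import stat
import tempfile

def check_key_path(path):
    # Assumed rule, mirroring the log: reject key files whose
    # group/other permission bits are set (anything looser than 0600).
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError(
            f"Invalid permissions for key file '{path}': 0{mode:o}")

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o660)
try:
    check_key_path(path)
    rejected = False
except PermissionError:
    rejected = True   # 0660 is rejected, as in the log

os.chmod(path, 0o600)
check_key_path(path)  # 0600 passes without raising
os.remove(path)
```

This matches the test flow: add with 0660 fails with "Operation not permitted", then the same path is re-added after tightening to 0600.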
00:36:04.612 04:19:19 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:04.612 04:19:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:04.612 04:19:19 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:04.613 04:19:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:04.613 [2024-07-25 04:19:19.897887] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.qmcuGQJOOD': No such file or directory 00:36:04.613 [2024-07-25 04:19:19.897926] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:04.613 [2024-07-25 04:19:19.897967] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:04.613 [2024-07-25 04:19:19.897980] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:04.613 [2024-07-25 04:19:19.897993] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:04.613 request: 00:36:04.613 { 00:36:04.613 "name": "nvme0", 00:36:04.613 "trtype": "tcp", 00:36:04.613 "traddr": "127.0.0.1", 00:36:04.613 "adrfam": "ipv4", 00:36:04.613 "trsvcid": "4420", 00:36:04.613 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:04.613 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:04.613 "prchk_reftag": false, 00:36:04.613 "prchk_guard": false, 00:36:04.613 "hdgst": false, 00:36:04.613 "ddgst": false, 00:36:04.613 "psk": "key0", 00:36:04.613 "method": "bdev_nvme_attach_controller", 00:36:04.613 "req_id": 1 00:36:04.613 } 00:36:04.613 Got 
JSON-RPC error response 00:36:04.613 response: 00:36:04.613 { 00:36:04.613 "code": -19, 00:36:04.613 "message": "No such device" 00:36:04.613 } 00:36:04.871 04:19:19 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:04.871 04:19:19 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:04.871 04:19:19 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:04.871 04:19:19 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:04.871 04:19:19 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:04.871 04:19:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:05.129 04:19:20 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:05.129 04:19:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:05.129 04:19:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:05.129 04:19:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:05.129 04:19:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:05.129 04:19:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:05.129 04:19:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AVoJikiNQh 00:36:05.129 04:19:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:05.129 04:19:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:05.129 04:19:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:05.129 04:19:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:05.129 04:19:20 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:05.129 04:19:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:05.129 04:19:20 keyring_file -- 
nvmf/common.sh@705 -- # python - 00:36:05.129 04:19:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AVoJikiNQh 00:36:05.129 04:19:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AVoJikiNQh 00:36:05.129 04:19:20 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.AVoJikiNQh 00:36:05.129 04:19:20 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AVoJikiNQh 00:36:05.129 04:19:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AVoJikiNQh 00:36:05.387 04:19:20 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:05.387 04:19:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:05.645 nvme0n1 00:36:05.645 04:19:20 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:05.645 04:19:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:05.645 04:19:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:05.645 04:19:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:05.645 04:19:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:05.645 04:19:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:05.903 04:19:21 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:05.903 04:19:21 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:05.903 04:19:21 keyring_file -- keyring/common.sh@8 -- # 
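`prep_key` above turns the raw hex key `00112233445566778899aabbccddeeff` into the NVMe TLS PSK interchange format via `format_interchange_psk` (the inline `python -` step). A sketch of that conversion under an assumed layout for the interchange string (prefix, two-hex-digit hash identifier, base64 of the key bytes plus a little-endian CRC-32, trailing colon); treat both the layout and the exact output as illustrative rather than authoritative:

```python
import base64
import struct
import zlib

def format_interchange_psk(hex_key: str, hmac_id: int = 0) -> str:
    # Assumed interchange layout:
    #   NVMeTLSkey-1:<hh>:<base64(key || crc32(key) little-endian)>:
    key = bytes.fromhex(hex_key)
    payload = key + struct.pack("<I", zlib.crc32(key))
    return f"NVMeTLSkey-1:{hmac_id:02x}:{base64.b64encode(payload).decode()}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
print(psk)  # e.g. NVMeTLSkey-1:00:<base64 payload>:
```

The resulting string is what gets written to the 0600-mode temp file and registered with `keyring_file_add_key` before the TLS-enabled `bdev_nvme_attach_controller` call.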
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:06.161 04:19:21 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:06.161 04:19:21 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:06.161 04:19:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.161 04:19:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.161 04:19:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:06.418 04:19:21 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:06.418 04:19:21 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:06.418 04:19:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:06.418 04:19:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:06.418 04:19:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.418 04:19:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.418 04:19:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:06.675 04:19:21 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:06.675 04:19:21 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:06.675 04:19:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:06.933 04:19:22 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:06.933 04:19:22 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:06.933 04:19:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:36:07.190 04:19:22 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:07.190 04:19:22 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AVoJikiNQh 00:36:07.190 04:19:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AVoJikiNQh 00:36:07.448 04:19:22 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tMZBgW8LkK 00:36:07.448 04:19:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tMZBgW8LkK 00:36:07.706 04:19:22 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:07.706 04:19:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:07.964 nvme0n1 00:36:07.964 04:19:23 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:07.964 04:19:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:08.231 04:19:23 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:08.231 "subsystems": [ 00:36:08.231 { 00:36:08.231 "subsystem": "keyring", 00:36:08.231 "config": [ 00:36:08.231 { 00:36:08.231 "method": "keyring_file_add_key", 00:36:08.231 "params": { 00:36:08.231 "name": "key0", 00:36:08.231 "path": "/tmp/tmp.AVoJikiNQh" 00:36:08.231 } 00:36:08.231 }, 00:36:08.231 { 00:36:08.231 "method": "keyring_file_add_key", 00:36:08.231 "params": { 00:36:08.231 "name": "key1", 
00:36:08.231 "path": "/tmp/tmp.tMZBgW8LkK" 00:36:08.231 } 00:36:08.231 } 00:36:08.231 ] 00:36:08.231 }, 00:36:08.231 { 00:36:08.231 "subsystem": "iobuf", 00:36:08.231 "config": [ 00:36:08.231 { 00:36:08.231 "method": "iobuf_set_options", 00:36:08.231 "params": { 00:36:08.231 "small_pool_count": 8192, 00:36:08.231 "large_pool_count": 1024, 00:36:08.231 "small_bufsize": 8192, 00:36:08.231 "large_bufsize": 135168 00:36:08.231 } 00:36:08.231 } 00:36:08.231 ] 00:36:08.231 }, 00:36:08.231 { 00:36:08.231 "subsystem": "sock", 00:36:08.231 "config": [ 00:36:08.231 { 00:36:08.231 "method": "sock_set_default_impl", 00:36:08.231 "params": { 00:36:08.231 "impl_name": "posix" 00:36:08.231 } 00:36:08.231 }, 00:36:08.231 { 00:36:08.231 "method": "sock_impl_set_options", 00:36:08.231 "params": { 00:36:08.231 "impl_name": "ssl", 00:36:08.231 "recv_buf_size": 4096, 00:36:08.231 "send_buf_size": 4096, 00:36:08.231 "enable_recv_pipe": true, 00:36:08.231 "enable_quickack": false, 00:36:08.231 "enable_placement_id": 0, 00:36:08.231 "enable_zerocopy_send_server": true, 00:36:08.231 "enable_zerocopy_send_client": false, 00:36:08.231 "zerocopy_threshold": 0, 00:36:08.231 "tls_version": 0, 00:36:08.231 "enable_ktls": false 00:36:08.231 } 00:36:08.231 }, 00:36:08.231 { 00:36:08.231 "method": "sock_impl_set_options", 00:36:08.231 "params": { 00:36:08.231 "impl_name": "posix", 00:36:08.231 "recv_buf_size": 2097152, 00:36:08.231 "send_buf_size": 2097152, 00:36:08.231 "enable_recv_pipe": true, 00:36:08.231 "enable_quickack": false, 00:36:08.231 "enable_placement_id": 0, 00:36:08.231 "enable_zerocopy_send_server": true, 00:36:08.231 "enable_zerocopy_send_client": false, 00:36:08.231 "zerocopy_threshold": 0, 00:36:08.231 "tls_version": 0, 00:36:08.231 "enable_ktls": false 00:36:08.231 } 00:36:08.231 } 00:36:08.231 ] 00:36:08.231 }, 00:36:08.231 { 00:36:08.231 "subsystem": "vmd", 00:36:08.231 "config": [] 00:36:08.231 }, 00:36:08.231 { 00:36:08.231 "subsystem": "accel", 00:36:08.231 "config": [ 
00:36:08.231 { 00:36:08.231 "method": "accel_set_options", 00:36:08.231 "params": { 00:36:08.231 "small_cache_size": 128, 00:36:08.231 "large_cache_size": 16, 00:36:08.231 "task_count": 2048, 00:36:08.231 "sequence_count": 2048, 00:36:08.231 "buf_count": 2048 00:36:08.231 } 00:36:08.231 } 00:36:08.231 ] 00:36:08.231 }, 00:36:08.231 { 00:36:08.231 "subsystem": "bdev", 00:36:08.231 "config": [ 00:36:08.231 { 00:36:08.231 "method": "bdev_set_options", 00:36:08.231 "params": { 00:36:08.231 "bdev_io_pool_size": 65535, 00:36:08.231 "bdev_io_cache_size": 256, 00:36:08.231 "bdev_auto_examine": true, 00:36:08.231 "iobuf_small_cache_size": 128, 00:36:08.231 "iobuf_large_cache_size": 16 00:36:08.231 } 00:36:08.231 }, 00:36:08.231 { 00:36:08.231 "method": "bdev_raid_set_options", 00:36:08.231 "params": { 00:36:08.231 "process_window_size_kb": 1024, 00:36:08.231 "process_max_bandwidth_mb_sec": 0 00:36:08.231 } 00:36:08.231 }, 00:36:08.231 { 00:36:08.231 "method": "bdev_iscsi_set_options", 00:36:08.231 "params": { 00:36:08.231 "timeout_sec": 30 00:36:08.231 } 00:36:08.231 }, 00:36:08.231 { 00:36:08.231 "method": "bdev_nvme_set_options", 00:36:08.231 "params": { 00:36:08.231 "action_on_timeout": "none", 00:36:08.231 "timeout_us": 0, 00:36:08.231 "timeout_admin_us": 0, 00:36:08.231 "keep_alive_timeout_ms": 10000, 00:36:08.231 "arbitration_burst": 0, 00:36:08.231 "low_priority_weight": 0, 00:36:08.231 "medium_priority_weight": 0, 00:36:08.231 "high_priority_weight": 0, 00:36:08.231 "nvme_adminq_poll_period_us": 10000, 00:36:08.231 "nvme_ioq_poll_period_us": 0, 00:36:08.231 "io_queue_requests": 512, 00:36:08.231 "delay_cmd_submit": true, 00:36:08.231 "transport_retry_count": 4, 00:36:08.231 "bdev_retry_count": 3, 00:36:08.231 "transport_ack_timeout": 0, 00:36:08.231 "ctrlr_loss_timeout_sec": 0, 00:36:08.231 "reconnect_delay_sec": 0, 00:36:08.231 "fast_io_fail_timeout_sec": 0, 00:36:08.231 "disable_auto_failback": false, 00:36:08.231 "generate_uuids": false, 00:36:08.231 
"transport_tos": 0, 00:36:08.231 "nvme_error_stat": false, 00:36:08.231 "rdma_srq_size": 0, 00:36:08.231 "io_path_stat": false, 00:36:08.231 "allow_accel_sequence": false, 00:36:08.231 "rdma_max_cq_size": 0, 00:36:08.231 "rdma_cm_event_timeout_ms": 0, 00:36:08.231 "dhchap_digests": [ 00:36:08.231 "sha256", 00:36:08.231 "sha384", 00:36:08.231 "sha512" 00:36:08.231 ], 00:36:08.231 "dhchap_dhgroups": [ 00:36:08.231 "null", 00:36:08.231 "ffdhe2048", 00:36:08.231 "ffdhe3072", 00:36:08.231 "ffdhe4096", 00:36:08.231 "ffdhe6144", 00:36:08.231 "ffdhe8192" 00:36:08.231 ] 00:36:08.231 } 00:36:08.231 }, 00:36:08.231 { 00:36:08.232 "method": "bdev_nvme_attach_controller", 00:36:08.232 "params": { 00:36:08.232 "name": "nvme0", 00:36:08.232 "trtype": "TCP", 00:36:08.232 "adrfam": "IPv4", 00:36:08.232 "traddr": "127.0.0.1", 00:36:08.232 "trsvcid": "4420", 00:36:08.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:08.232 "prchk_reftag": false, 00:36:08.232 "prchk_guard": false, 00:36:08.232 "ctrlr_loss_timeout_sec": 0, 00:36:08.232 "reconnect_delay_sec": 0, 00:36:08.232 "fast_io_fail_timeout_sec": 0, 00:36:08.232 "psk": "key0", 00:36:08.232 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:08.232 "hdgst": false, 00:36:08.232 "ddgst": false 00:36:08.232 } 00:36:08.232 }, 00:36:08.232 { 00:36:08.232 "method": "bdev_nvme_set_hotplug", 00:36:08.232 "params": { 00:36:08.232 "period_us": 100000, 00:36:08.232 "enable": false 00:36:08.232 } 00:36:08.232 }, 00:36:08.232 { 00:36:08.232 "method": "bdev_wait_for_examine" 00:36:08.232 } 00:36:08.232 ] 00:36:08.232 }, 00:36:08.232 { 00:36:08.232 "subsystem": "nbd", 00:36:08.232 "config": [] 00:36:08.232 } 00:36:08.232 ] 00:36:08.232 }' 00:36:08.232 04:19:23 keyring_file -- keyring/file.sh@114 -- # killprocess 1010557 00:36:08.232 04:19:23 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1010557 ']' 00:36:08.232 04:19:23 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1010557 00:36:08.232 04:19:23 keyring_file -- 
common/autotest_common.sh@955 -- # uname
00:36:08.232 04:19:23 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:08.232 04:19:23 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1010557
00:36:08.232 04:19:23 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:36:08.232 04:19:23 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:36:08.232 04:19:23 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1010557'
00:36:08.232 killing process with pid 1010557
00:36:08.232 04:19:23 keyring_file -- common/autotest_common.sh@969 -- # kill 1010557
00:36:08.232 Received shutdown signal, test time was about 1.000000 seconds
00:36:08.232
00:36:08.232 Latency(us)
00:36:08.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:08.232 ===================================================================================================================
00:36:08.232 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:08.232 04:19:23 keyring_file -- common/autotest_common.sh@974 -- # wait 1010557
00:36:08.490 04:19:23 keyring_file -- keyring/file.sh@117 -- # bperfpid=1011920
00:36:08.490 04:19:23 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1011920 /var/tmp/bperf.sock
00:36:08.490 04:19:23 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1011920 ']'
00:36:08.490 04:19:23 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63
00:36:08.490 04:19:23 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:08.490 04:19:23 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:08.490 04:19:23 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/bperf.sock...' 00:36:08.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:08.490 04:19:23 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:08.490 04:19:23 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:08.490 "subsystems": [ 00:36:08.490 { 00:36:08.490 "subsystem": "keyring", 00:36:08.490 "config": [ 00:36:08.490 { 00:36:08.490 "method": "keyring_file_add_key", 00:36:08.490 "params": { 00:36:08.490 "name": "key0", 00:36:08.490 "path": "/tmp/tmp.AVoJikiNQh" 00:36:08.490 } 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 "method": "keyring_file_add_key", 00:36:08.490 "params": { 00:36:08.490 "name": "key1", 00:36:08.490 "path": "/tmp/tmp.tMZBgW8LkK" 00:36:08.490 } 00:36:08.490 } 00:36:08.490 ] 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 "subsystem": "iobuf", 00:36:08.490 "config": [ 00:36:08.490 { 00:36:08.490 "method": "iobuf_set_options", 00:36:08.490 "params": { 00:36:08.490 "small_pool_count": 8192, 00:36:08.490 "large_pool_count": 1024, 00:36:08.490 "small_bufsize": 8192, 00:36:08.490 "large_bufsize": 135168 00:36:08.490 } 00:36:08.490 } 00:36:08.490 ] 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 "subsystem": "sock", 00:36:08.490 "config": [ 00:36:08.490 { 00:36:08.490 "method": "sock_set_default_impl", 00:36:08.490 "params": { 00:36:08.490 "impl_name": "posix" 00:36:08.490 } 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 "method": "sock_impl_set_options", 00:36:08.490 "params": { 00:36:08.490 "impl_name": "ssl", 00:36:08.490 "recv_buf_size": 4096, 00:36:08.490 "send_buf_size": 4096, 00:36:08.490 "enable_recv_pipe": true, 00:36:08.490 "enable_quickack": false, 00:36:08.490 "enable_placement_id": 0, 00:36:08.490 "enable_zerocopy_send_server": true, 00:36:08.490 "enable_zerocopy_send_client": false, 00:36:08.490 "zerocopy_threshold": 0, 00:36:08.490 "tls_version": 0, 00:36:08.490 "enable_ktls": false 00:36:08.490 } 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 "method": 
"sock_impl_set_options", 00:36:08.490 "params": { 00:36:08.490 "impl_name": "posix", 00:36:08.490 "recv_buf_size": 2097152, 00:36:08.490 "send_buf_size": 2097152, 00:36:08.490 "enable_recv_pipe": true, 00:36:08.490 "enable_quickack": false, 00:36:08.490 "enable_placement_id": 0, 00:36:08.490 "enable_zerocopy_send_server": true, 00:36:08.490 "enable_zerocopy_send_client": false, 00:36:08.490 "zerocopy_threshold": 0, 00:36:08.490 "tls_version": 0, 00:36:08.490 "enable_ktls": false 00:36:08.490 } 00:36:08.490 } 00:36:08.490 ] 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 "subsystem": "vmd", 00:36:08.490 "config": [] 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 "subsystem": "accel", 00:36:08.490 "config": [ 00:36:08.490 { 00:36:08.490 "method": "accel_set_options", 00:36:08.490 "params": { 00:36:08.490 "small_cache_size": 128, 00:36:08.490 "large_cache_size": 16, 00:36:08.490 "task_count": 2048, 00:36:08.490 "sequence_count": 2048, 00:36:08.490 "buf_count": 2048 00:36:08.490 } 00:36:08.490 } 00:36:08.490 ] 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 "subsystem": "bdev", 00:36:08.490 "config": [ 00:36:08.490 { 00:36:08.490 "method": "bdev_set_options", 00:36:08.490 "params": { 00:36:08.490 "bdev_io_pool_size": 65535, 00:36:08.490 "bdev_io_cache_size": 256, 00:36:08.490 "bdev_auto_examine": true, 00:36:08.490 "iobuf_small_cache_size": 128, 00:36:08.490 "iobuf_large_cache_size": 16 00:36:08.490 } 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 "method": "bdev_raid_set_options", 00:36:08.490 "params": { 00:36:08.490 "process_window_size_kb": 1024, 00:36:08.490 "process_max_bandwidth_mb_sec": 0 00:36:08.490 } 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 "method": "bdev_iscsi_set_options", 00:36:08.490 "params": { 00:36:08.490 "timeout_sec": 30 00:36:08.490 } 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 "method": "bdev_nvme_set_options", 00:36:08.490 "params": { 00:36:08.490 "action_on_timeout": "none", 00:36:08.490 "timeout_us": 0, 00:36:08.490 "timeout_admin_us": 0, 00:36:08.490 
"keep_alive_timeout_ms": 10000, 00:36:08.490 "arbitration_burst": 0, 00:36:08.490 "low_priority_weight": 0, 00:36:08.490 "medium_priority_weight": 0, 00:36:08.490 "high_priority_weight": 0, 00:36:08.490 "nvme_adminq_poll_period_us": 10000, 00:36:08.490 "nvme_ioq_poll_period_us": 0, 00:36:08.490 "io_queue_requests": 512, 00:36:08.490 "delay_cmd_submit": true, 00:36:08.490 "transport_retry_count": 4, 00:36:08.490 "bdev_retry_count": 3, 00:36:08.490 "transport_ack_timeout": 0, 00:36:08.490 "ctrlr_loss_timeout_sec": 0, 00:36:08.490 "reconnect_delay_sec": 0, 00:36:08.490 "fast_io_fail_timeout_sec": 0, 00:36:08.490 "disable_auto_failback": false, 00:36:08.490 "generate_uuids": false, 00:36:08.490 "transport_tos": 0, 00:36:08.490 "nvme_error_stat": false, 00:36:08.490 "rdma_srq_size": 0, 00:36:08.490 "io_path_stat": false, 00:36:08.490 "allow_accel_sequence": false, 00:36:08.490 "rdma_max_cq_size": 0, 00:36:08.490 "rdma_cm_event_timeout_ms": 0, 00:36:08.490 "dhchap_digests": [ 00:36:08.490 "sha256", 00:36:08.490 "sha384", 00:36:08.490 "sha512" 00:36:08.490 ], 00:36:08.490 "dhchap_dhgroups": [ 00:36:08.490 "null", 00:36:08.490 "ffdhe2048", 00:36:08.490 "ffdhe3072", 00:36:08.490 "ffdhe4096", 00:36:08.490 "ffdhe6144", 00:36:08.490 "ffdhe8192" 00:36:08.490 ] 00:36:08.490 } 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 "method": "bdev_nvme_attach_controller", 00:36:08.490 "params": { 00:36:08.490 "name": "nvme0", 00:36:08.490 "trtype": "TCP", 00:36:08.490 "adrfam": "IPv4", 00:36:08.490 "traddr": "127.0.0.1", 00:36:08.490 "trsvcid": "4420", 00:36:08.490 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:08.490 "prchk_reftag": false, 00:36:08.490 "prchk_guard": false, 00:36:08.490 "ctrlr_loss_timeout_sec": 0, 00:36:08.490 "reconnect_delay_sec": 0, 00:36:08.490 "fast_io_fail_timeout_sec": 0, 00:36:08.490 "psk": "key0", 00:36:08.490 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:08.490 "hdgst": false, 00:36:08.490 "ddgst": false 00:36:08.490 } 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 
"method": "bdev_nvme_set_hotplug", 00:36:08.490 "params": { 00:36:08.490 "period_us": 100000, 00:36:08.490 "enable": false 00:36:08.490 } 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 "method": "bdev_wait_for_examine" 00:36:08.490 } 00:36:08.490 ] 00:36:08.490 }, 00:36:08.490 { 00:36:08.490 "subsystem": "nbd", 00:36:08.490 "config": [] 00:36:08.491 } 00:36:08.491 ] 00:36:08.491 }' 00:36:08.491 04:19:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:08.491 [2024-07-25 04:19:23.681140] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:36:08.491 [2024-07-25 04:19:23.681257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011920 ] 00:36:08.491 EAL: No free 2048 kB hugepages reported on node 1 00:36:08.491 [2024-07-25 04:19:23.714382] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:36:08.491 [2024-07-25 04:19:23.748114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.748 [2024-07-25 04:19:23.839975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:08.748 [2024-07-25 04:19:24.025041] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:09.681 04:19:24 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:09.681 04:19:24 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:09.681 04:19:24 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:09.681 04:19:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:09.681 04:19:24 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:09.681 04:19:24 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:09.681 04:19:24 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:09.681 04:19:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:09.681 04:19:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:09.681 04:19:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:09.681 04:19:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:09.681 04:19:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:09.939 04:19:25 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:09.939 04:19:25 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:09.939 04:19:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:09.939 04:19:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:09.939 04:19:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:09.939 04:19:25 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:09.939 04:19:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:10.197 04:19:25 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:10.197 04:19:25 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:10.197 04:19:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:10.197 04:19:25 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:10.458 04:19:25 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:10.458 04:19:25 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:10.458 04:19:25 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.AVoJikiNQh /tmp/tmp.tMZBgW8LkK 00:36:10.458 04:19:25 keyring_file -- keyring/file.sh@20 -- # killprocess 1011920 00:36:10.458 04:19:25 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1011920 ']' 00:36:10.458 04:19:25 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1011920 00:36:10.458 04:19:25 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:10.458 04:19:25 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:10.458 04:19:25 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1011920 00:36:10.458 04:19:25 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:10.458 04:19:25 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:10.458 04:19:25 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1011920' 00:36:10.458 killing process with pid 1011920 00:36:10.458 04:19:25 keyring_file -- common/autotest_common.sh@969 -- # kill 1011920 00:36:10.458 Received shutdown signal, test time was about 1.000000 seconds 00:36:10.458 00:36:10.458 Latency(us) 
00:36:10.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:10.458 =================================================================================================================== 00:36:10.458 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:10.458 04:19:25 keyring_file -- common/autotest_common.sh@974 -- # wait 1011920 00:36:10.721 04:19:25 keyring_file -- keyring/file.sh@21 -- # killprocess 1010547 00:36:10.721 04:19:25 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1010547 ']' 00:36:10.721 04:19:25 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1010547 00:36:10.721 04:19:25 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:10.721 04:19:25 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:10.721 04:19:25 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1010547 00:36:10.721 04:19:25 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:10.721 04:19:25 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:10.721 04:19:25 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1010547' 00:36:10.721 killing process with pid 1010547 00:36:10.721 04:19:25 keyring_file -- common/autotest_common.sh@969 -- # kill 1010547 00:36:10.721 [2024-07-25 04:19:25.888301] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:10.721 04:19:25 keyring_file -- common/autotest_common.sh@974 -- # wait 1010547 00:36:11.288 00:36:11.288 real 0m14.009s 00:36:11.288 user 0m34.722s 00:36:11.288 sys 0m3.292s 00:36:11.288 04:19:26 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:11.288 04:19:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:11.288 ************************************ 00:36:11.288 END TEST keyring_file 00:36:11.288 
************************************ 00:36:11.288 04:19:26 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:36:11.288 04:19:26 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:11.288 04:19:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:11.288 04:19:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:11.288 04:19:26 -- common/autotest_common.sh@10 -- # set +x 00:36:11.288 ************************************ 00:36:11.288 START TEST keyring_linux 00:36:11.288 ************************************ 00:36:11.288 04:19:26 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:11.288 * Looking for test storage... 00:36:11.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:11.288 04:19:26 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:11.288 04:19:26 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.288 04:19:26 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.288 04:19:26 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.288 04:19:26 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.288 04:19:26 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.288 04:19:26 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.288 04:19:26 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.288 04:19:26 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:11.288 04:19:26 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:11.288 04:19:26 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:11.288 04:19:26 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:11.288 04:19:26 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:11.288 04:19:26 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:11.288 04:19:26 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:11.288 04:19:26 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:11.288 04:19:26 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:11.288 04:19:26 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:11.288 04:19:26 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:11.289 04:19:26 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:11.289 04:19:26 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:11.289 04:19:26 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:11.289 04:19:26 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:11.289 04:19:26 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:11.289 04:19:26 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:11.289 04:19:26 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:11.289 04:19:26 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:11.289 04:19:26 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:11.289 04:19:26 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:11.289 04:19:26 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:11.289 04:19:26 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:11.289 04:19:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:11.289 /tmp/:spdk-test:key0 00:36:11.289 04:19:26 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:11.289 04:19:26 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:11.289 04:19:26 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:11.289 04:19:26 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:11.289 04:19:26 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:11.289 04:19:26 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:11.289 04:19:26 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:11.289 04:19:26 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:11.289 04:19:26 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:11.289 04:19:26 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:11.289 04:19:26 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:11.289 04:19:26 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:11.289 04:19:26 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:11.289 04:19:26 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:11.289 04:19:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:11.289 /tmp/:spdk-test:key1 00:36:11.289 04:19:26 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1012370 00:36:11.289 04:19:26 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:11.289 04:19:26 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1012370 00:36:11.289 04:19:26 keyring_linux -- common/autotest_common.sh@831 
-- # '[' -z 1012370 ']' 00:36:11.289 04:19:26 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:11.289 04:19:26 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:11.289 04:19:26 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:11.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:11.289 04:19:26 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:11.289 04:19:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:11.289 [2024-07-25 04:19:26.547369] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:36:11.289 [2024-07-25 04:19:26.547469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012370 ] 00:36:11.289 EAL: No free 2048 kB hugepages reported on node 1 00:36:11.289 [2024-07-25 04:19:26.581339] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:36:11.547 [2024-07-25 04:19:26.624998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:11.547 [2024-07-25 04:19:26.710569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:11.804 04:19:26 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:11.804 04:19:26 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:36:11.804 04:19:26 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:11.804 04:19:26 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.804 04:19:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:11.804 [2024-07-25 04:19:26.960836] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:11.804 null0 00:36:11.804 [2024-07-25 04:19:26.992876] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:11.804 [2024-07-25 04:19:26.993320] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:11.804 04:19:27 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.804 04:19:27 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:11.804 695402283 00:36:11.804 04:19:27 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:11.804 730457455 00:36:11.804 04:19:27 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1012405 00:36:11.804 04:19:27 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:11.804 04:19:27 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1012405 /var/tmp/bperf.sock 00:36:11.804 04:19:27 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1012405 ']' 00:36:11.804 04:19:27 keyring_linux -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:11.805 04:19:27 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:11.805 04:19:27 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:11.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:11.805 04:19:27 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:11.805 04:19:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:11.805 [2024-07-25 04:19:27.055819] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:36:11.805 [2024-07-25 04:19:27.055890] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012405 ] 00:36:11.805 EAL: No free 2048 kB hugepages reported on node 1 00:36:11.805 [2024-07-25 04:19:27.088558] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:36:12.062 [2024-07-25 04:19:27.116676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.062 [2024-07-25 04:19:27.201879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:12.062 04:19:27 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:12.062 04:19:27 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:36:12.062 04:19:27 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:12.062 04:19:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:12.319 04:19:27 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:12.319 04:19:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:12.577 04:19:27 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:12.577 04:19:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:12.834 [2024-07-25 04:19:28.070563] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:13.091 nvme0n1 00:36:13.091 04:19:28 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:13.091 04:19:28 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:13.091 04:19:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:13.091 04:19:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 
00:36:13.091 04:19:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:13.091 04:19:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.348 04:19:28 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:13.348 04:19:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:13.348 04:19:28 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:13.348 04:19:28 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:13.348 04:19:28 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:13.348 04:19:28 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:13.348 04:19:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.607 04:19:28 keyring_linux -- keyring/linux.sh@25 -- # sn=695402283 00:36:13.607 04:19:28 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:13.607 04:19:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:13.607 04:19:28 keyring_linux -- keyring/linux.sh@26 -- # [[ 695402283 == \6\9\5\4\0\2\2\8\3 ]] 00:36:13.607 04:19:28 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 695402283 00:36:13.607 04:19:28 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:13.607 04:19:28 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:13.607 Running I/O for 1 seconds... 
00:36:14.540 00:36:14.540 Latency(us) 00:36:14.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.540 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:14.540 nvme0n1 : 1.02 5072.03 19.81 0.00 0.00 25014.54 7233.23 33010.73 00:36:14.540 =================================================================================================================== 00:36:14.540 Total : 5072.03 19.81 0.00 0.00 25014.54 7233.23 33010.73 00:36:14.540 0 00:36:14.540 04:19:29 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:14.540 04:19:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:14.798 04:19:30 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:14.798 04:19:30 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:14.798 04:19:30 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:14.798 04:19:30 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:14.798 04:19:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:14.798 04:19:30 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:15.056 04:19:30 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:15.056 04:19:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:15.056 04:19:30 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:15.056 04:19:30 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:15.056 04:19:30 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:36:15.056 04:19:30 keyring_linux -- common/autotest_common.sh@652 -- # 
valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:15.056 04:19:30 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:15.056 04:19:30 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:15.056 04:19:30 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:15.056 04:19:30 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:15.056 04:19:30 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:15.056 04:19:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:15.314 [2024-07-25 04:19:30.556579] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:15.314 [2024-07-25 04:19:30.556848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205aa00 (107): Transport endpoint is not connected 00:36:15.314 [2024-07-25 04:19:30.557836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205aa00 (9): Bad file descriptor 00:36:15.314 [2024-07-25 04:19:30.558841] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:15.314 [2024-07-25 04:19:30.558866] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:15.314 [2024-07-25 04:19:30.558882] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:36:15.314 request:
00:36:15.314 {
00:36:15.314 "name": "nvme0",
00:36:15.314 "trtype": "tcp",
00:36:15.314 "traddr": "127.0.0.1",
00:36:15.314 "adrfam": "ipv4",
00:36:15.314 "trsvcid": "4420",
00:36:15.314 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:15.314 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:15.314 "prchk_reftag": false,
00:36:15.314 "prchk_guard": false,
00:36:15.314 "hdgst": false,
00:36:15.314 "ddgst": false,
00:36:15.314 "psk": ":spdk-test:key1",
00:36:15.314 "method": "bdev_nvme_attach_controller",
00:36:15.314 "req_id": 1
00:36:15.314 }
00:36:15.314 Got JSON-RPC error response
00:36:15.314 response:
00:36:15.314 {
00:36:15.314 "code": -5,
00:36:15.314 "message": "Input/output error"
00:36:15.314 }
00:36:15.314 04:19:30 keyring_linux -- common/autotest_common.sh@653 -- # es=1
00:36:15.314 04:19:30 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:36:15.314 04:19:30 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:36:15.314 04:19:30 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:36:15.314 04:19:30 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:36:15.314 04:19:30 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:36:15.314 04:19:30 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:36:15.314 04:19:30 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:36:15.314 04:19:30 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:36:15.314 04:19:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:36:15.314 04:19:30 keyring_linux -- keyring/linux.sh@33 -- # sn=695402283
00:36:15.314 04:19:30 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 695402283
00:36:15.314 1 links removed
00:36:15.314 04:19:30 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:36:15.314 04:19:30 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:36:15.314 04:19:30 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:36:15.314 04:19:30 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:36:15.314 04:19:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:36:15.314 04:19:30 keyring_linux -- keyring/linux.sh@33 -- # sn=730457455
00:36:15.315 04:19:30 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 730457455
00:36:15.315 1 links removed
00:36:15.315 04:19:30 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1012405
00:36:15.315 04:19:30 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1012405 ']'
00:36:15.315 04:19:30 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1012405
00:36:15.315 04:19:30 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:36:15.315 04:19:30 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:15.315 04:19:30 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1012405
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1012405'
00:36:15.573 killing process with pid 1012405
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@969 -- # kill 1012405
00:36:15.573 Received shutdown signal, test time was about 1.000000 seconds
00:36:15.573
00:36:15.573 Latency(us)
00:36:15.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:15.573 ===================================================================================================================
00:36:15.573 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@974 -- # wait 1012405
00:36:15.573 04:19:30 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1012370
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1012370 ']'
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1012370
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1012370
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1012370'
00:36:15.573 killing process with pid 1012370
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@969 -- # kill 1012370
00:36:15.573 04:19:30 keyring_linux -- common/autotest_common.sh@974 -- # wait 1012370
00:36:16.141
00:36:16.141 real 0m4.960s
00:36:16.141 user 0m9.234s
00:36:16.141 sys 0m1.603s
00:36:16.141 04:19:31 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable
00:36:16.141 04:19:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:36:16.141 ************************************
00:36:16.141 END TEST keyring_linux
00:36:16.141 ************************************
00:36:16.141 04:19:31 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:36:16.141 04:19:31 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:36:16.141 04:19:31 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:36:16.141 04:19:31 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:36:16.141 04:19:31 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:36:16.142 04:19:31 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:36:16.142 04:19:31 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:36:16.142 04:19:31 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:36:16.142 04:19:31 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:36:16.142 04:19:31 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:36:16.142 04:19:31 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']'
00:36:16.142 04:19:31 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:36:16.142 04:19:31 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:36:16.142 04:19:31 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:36:16.142 04:19:31 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]]
00:36:16.142 04:19:31 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT
00:36:16.142 04:19:31 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup
00:36:16.142 04:19:31 -- common/autotest_common.sh@724 -- # xtrace_disable
00:36:16.142 04:19:31 -- common/autotest_common.sh@10 -- # set +x
00:36:16.142 04:19:31 -- spdk/autotest.sh@387 -- # autotest_cleanup
00:36:16.142 04:19:31 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:36:16.142 04:19:31 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:36:16.142 04:19:31 -- common/autotest_common.sh@10 -- # set +x
00:36:18.043 INFO: APP EXITING
00:36:18.043 INFO: killing all VMs
00:36:18.043 INFO: killing vhost app
00:36:18.043 INFO: EXIT DONE
00:36:18.977 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:36:18.977 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:36:18.977 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:36:18.977 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:36:18.977 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:36:18.977 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:36:18.977 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:36:18.977 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:36:18.977 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:36:18.977 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:36:18.977 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:36:18.977 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:36:18.977 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:36:19.234 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:36:19.234 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:36:19.234 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:36:19.234 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:36:20.606 Cleaning
00:36:20.606 Removing: /var/run/dpdk/spdk0/config
00:36:20.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:36:20.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:36:20.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:36:20.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:36:20.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:36:20.607 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:36:20.607 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:36:20.607 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:36:20.607 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:36:20.607 Removing: /var/run/dpdk/spdk0/hugepage_info
00:36:20.607 Removing: /var/run/dpdk/spdk1/config
00:36:20.607 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:36:20.607 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:36:20.607 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:36:20.607 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:36:20.607 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:36:20.607 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:36:20.607 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:36:20.607 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:36:20.607 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:36:20.607 Removing: /var/run/dpdk/spdk1/hugepage_info
00:36:20.607 Removing: /var/run/dpdk/spdk1/mp_socket
00:36:20.607 Removing: /var/run/dpdk/spdk2/config
00:36:20.607 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:36:20.607 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:36:20.607 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:36:20.607 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:36:20.607 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:36:20.607 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:36:20.607 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:36:20.607 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:36:20.607 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:36:20.607 Removing: /var/run/dpdk/spdk2/hugepage_info
00:36:20.607 Removing: /var/run/dpdk/spdk3/config
00:36:20.607 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:36:20.607 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:36:20.607 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:36:20.607 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:36:20.607 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:36:20.607 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:36:20.607 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:36:20.607 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:36:20.607 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:36:20.607 Removing: /var/run/dpdk/spdk3/hugepage_info
00:36:20.607 Removing: /var/run/dpdk/spdk4/config
00:36:20.607 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:36:20.607 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:36:20.607 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:36:20.607 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:36:20.607 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:36:20.607 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:36:20.607 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:36:20.607 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:36:20.607 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:36:20.607 Removing: /var/run/dpdk/spdk4/hugepage_info
00:36:20.607 Removing: /dev/shm/bdev_svc_trace.1
00:36:20.607 Removing: /dev/shm/nvmf_trace.0
00:36:20.607 Removing: /dev/shm/spdk_tgt_trace.pid696470
00:36:20.607 Removing: /var/run/dpdk/spdk0
00:36:20.607 Removing: /var/run/dpdk/spdk1
00:36:20.607 Removing: /var/run/dpdk/spdk2
00:36:20.607 Removing: /var/run/dpdk/spdk3
00:36:20.607 Removing: /var/run/dpdk/spdk4
00:36:20.607 Removing: /var/run/dpdk/spdk_pid1005145
00:36:20.607 Removing: /var/run/dpdk/spdk_pid1005492
00:36:20.607 Removing: /var/run/dpdk/spdk_pid1005884
00:36:20.607 Removing: /var/run/dpdk/spdk_pid1007434
00:36:20.607 Removing: /var/run/dpdk/spdk_pid1007830
00:36:20.607 Removing: /var/run/dpdk/spdk_pid1008109
00:36:20.607 Removing: /var/run/dpdk/spdk_pid1010547
00:36:20.607 Removing: /var/run/dpdk/spdk_pid1010557
00:36:20.607 Removing: /var/run/dpdk/spdk_pid1011920
00:36:20.607 Removing: /var/run/dpdk/spdk_pid1012370
00:36:20.607 Removing: /var/run/dpdk/spdk_pid1012405
00:36:20.607 Removing: /var/run/dpdk/spdk_pid694922
00:36:20.607 Removing: /var/run/dpdk/spdk_pid695652
00:36:20.607 Removing: /var/run/dpdk/spdk_pid696470
00:36:20.607 Removing: /var/run/dpdk/spdk_pid696900
00:36:20.607 Removing: /var/run/dpdk/spdk_pid697587
00:36:20.607 Removing: /var/run/dpdk/spdk_pid697729
00:36:20.607 Removing: /var/run/dpdk/spdk_pid698447
00:36:20.607 Removing: /var/run/dpdk/spdk_pid698457
00:36:20.607 Removing: /var/run/dpdk/spdk_pid698699
00:36:20.607 Removing: /var/run/dpdk/spdk_pid700015
00:36:20.607 Removing: /var/run/dpdk/spdk_pid700942
00:36:20.607 Removing: /var/run/dpdk/spdk_pid701245
00:36:20.607 Removing: /var/run/dpdk/spdk_pid701437
00:36:20.607 Removing: /var/run/dpdk/spdk_pid701641
00:36:20.607 Removing: /var/run/dpdk/spdk_pid701827
00:36:20.607 Removing: /var/run/dpdk/spdk_pid701993
00:36:20.607 Removing: /var/run/dpdk/spdk_pid702145
00:36:20.607 Removing: /var/run/dpdk/spdk_pid702327
00:36:20.607 Removing: /var/run/dpdk/spdk_pid702638
00:36:20.607 Removing: /var/run/dpdk/spdk_pid704989
00:36:20.607 Removing: /var/run/dpdk/spdk_pid705153
00:36:20.607 Removing: /var/run/dpdk/spdk_pid705313
00:36:20.607 Removing: /var/run/dpdk/spdk_pid705446
00:36:20.607 Removing: /var/run/dpdk/spdk_pid705747
00:36:20.607 Removing: /var/run/dpdk/spdk_pid705756
00:36:20.607 Removing: /var/run/dpdk/spdk_pid706181
00:36:20.607 Removing: /var/run/dpdk/spdk_pid706190
00:36:20.607 Removing: /var/run/dpdk/spdk_pid706479
00:36:20.607 Removing: /var/run/dpdk/spdk_pid706490
00:36:20.607 Removing: /var/run/dpdk/spdk_pid706656
00:36:20.607 Removing: /var/run/dpdk/spdk_pid706786
00:36:20.607 Removing: /var/run/dpdk/spdk_pid707156
00:36:20.607 Removing: /var/run/dpdk/spdk_pid707308
00:36:20.607 Removing: /var/run/dpdk/spdk_pid707505
00:36:20.607 Removing: /var/run/dpdk/spdk_pid709578
00:36:20.607 Removing: /var/run/dpdk/spdk_pid712076
00:36:20.607 Removing: /var/run/dpdk/spdk_pid719182
00:36:20.607 Removing: /var/run/dpdk/spdk_pid719588
00:36:20.607 Removing: /var/run/dpdk/spdk_pid721996
00:36:20.607 Removing: /var/run/dpdk/spdk_pid722280
00:36:20.607 Removing: /var/run/dpdk/spdk_pid724899
00:36:20.607 Removing: /var/run/dpdk/spdk_pid729113
00:36:20.607 Removing: /var/run/dpdk/spdk_pid731175
00:36:20.607 Removing: /var/run/dpdk/spdk_pid737564
00:36:20.607 Removing: /var/run/dpdk/spdk_pid742774
00:36:20.607 Removing: /var/run/dpdk/spdk_pid743974
00:36:20.607 Removing: /var/run/dpdk/spdk_pid744647
00:36:20.607 Removing: /var/run/dpdk/spdk_pid754872
00:36:20.607 Removing: /var/run/dpdk/spdk_pid757018
00:36:20.607 Removing: /var/run/dpdk/spdk_pid810952
00:36:20.607 Removing: /var/run/dpdk/spdk_pid814114
00:36:20.607 Removing: /var/run/dpdk/spdk_pid818059
00:36:20.607 Removing: /var/run/dpdk/spdk_pid821881
00:36:20.607 Removing: /var/run/dpdk/spdk_pid821884
00:36:20.607 Removing: /var/run/dpdk/spdk_pid823039
00:36:20.607 Removing: /var/run/dpdk/spdk_pid823577
00:36:20.607 Removing: /var/run/dpdk/spdk_pid824227
00:36:20.607 Removing: /var/run/dpdk/spdk_pid824632
00:36:20.607 Removing: /var/run/dpdk/spdk_pid824639
00:36:20.607 Removing: /var/run/dpdk/spdk_pid824896
00:36:20.607 Removing: /var/run/dpdk/spdk_pid824909
00:36:20.607 Removing: /var/run/dpdk/spdk_pid825031
00:36:20.607 Removing: /var/run/dpdk/spdk_pid825570
00:36:20.607 Removing: /var/run/dpdk/spdk_pid826219
00:36:20.607 Removing: /var/run/dpdk/spdk_pid826873
00:36:20.607 Removing: /var/run/dpdk/spdk_pid827292
00:36:20.607 Removing: /var/run/dpdk/spdk_pid827294
00:36:20.607 Removing: /var/run/dpdk/spdk_pid827439
00:36:20.607 Removing: /var/run/dpdk/spdk_pid828315
00:36:20.607 Removing: /var/run/dpdk/spdk_pid829035
00:36:20.607 Removing: /var/run/dpdk/spdk_pid834355
00:36:20.607 Removing: /var/run/dpdk/spdk_pid859505
00:36:20.607 Removing: /var/run/dpdk/spdk_pid862290
00:36:20.607 Removing: /var/run/dpdk/spdk_pid863467
00:36:20.607 Removing: /var/run/dpdk/spdk_pid864780
00:36:20.607 Removing: /var/run/dpdk/spdk_pid864851
00:36:20.607 Removing: /var/run/dpdk/spdk_pid864932
00:36:20.607 Removing: /var/run/dpdk/spdk_pid865068
00:36:20.607 Removing: /var/run/dpdk/spdk_pid865435
00:36:20.607 Removing: /var/run/dpdk/spdk_pid866697
00:36:20.607 Removing: /var/run/dpdk/spdk_pid867415
00:36:20.607 Removing: /var/run/dpdk/spdk_pid867726
00:36:20.607 Removing: /var/run/dpdk/spdk_pid869348
00:36:20.607 Removing: /var/run/dpdk/spdk_pid869773
00:36:20.607 Removing: /var/run/dpdk/spdk_pid870301
00:36:20.607 Removing: /var/run/dpdk/spdk_pid873214
00:36:20.607 Removing: /var/run/dpdk/spdk_pid876585
00:36:20.607 Removing: /var/run/dpdk/spdk_pid880121
00:36:20.607 Removing: /var/run/dpdk/spdk_pid903610
00:36:20.607 Removing: /var/run/dpdk/spdk_pid906364
00:36:20.607 Removing: /var/run/dpdk/spdk_pid910134
00:36:20.607 Removing: /var/run/dpdk/spdk_pid911085
00:36:20.607 Removing: /var/run/dpdk/spdk_pid912173
00:36:20.607 Removing: /var/run/dpdk/spdk_pid914754
00:36:20.607 Removing: /var/run/dpdk/spdk_pid917100
00:36:20.607 Removing: /var/run/dpdk/spdk_pid921193
00:36:20.607 Removing: /var/run/dpdk/spdk_pid921324
00:36:20.865 Removing: /var/run/dpdk/spdk_pid924097
00:36:20.865 Removing: /var/run/dpdk/spdk_pid924227
00:36:20.865 Removing: /var/run/dpdk/spdk_pid924363
00:36:20.865 Removing: /var/run/dpdk/spdk_pid924636
00:36:20.865 Removing: /var/run/dpdk/spdk_pid924730
00:36:20.865 Removing: /var/run/dpdk/spdk_pid925717
00:36:20.865 Removing: /var/run/dpdk/spdk_pid927005
00:36:20.865 Removing: /var/run/dpdk/spdk_pid928181
00:36:20.865 Removing: /var/run/dpdk/spdk_pid929359
00:36:20.865 Removing: /var/run/dpdk/spdk_pid930541
00:36:20.865 Removing: /var/run/dpdk/spdk_pid931802
00:36:20.865 Removing: /var/run/dpdk/spdk_pid936133
00:36:20.865 Removing: /var/run/dpdk/spdk_pid936470
00:36:20.865 Removing: /var/run/dpdk/spdk_pid937860
00:36:20.865 Removing: /var/run/dpdk/spdk_pid938600
00:36:20.865 Removing: /var/run/dpdk/spdk_pid942185
00:36:20.865 Removing: /var/run/dpdk/spdk_pid944150
00:36:20.865 Removing: /var/run/dpdk/spdk_pid947445
00:36:20.865 Removing: /var/run/dpdk/spdk_pid950890
00:36:20.865 Removing: /var/run/dpdk/spdk_pid957096
00:36:20.865 Removing: /var/run/dpdk/spdk_pid961556
00:36:20.865 Removing: /var/run/dpdk/spdk_pid961558
00:36:20.865 Removing: /var/run/dpdk/spdk_pid974386
00:36:20.865 Removing: /var/run/dpdk/spdk_pid974795
00:36:20.865 Removing: /var/run/dpdk/spdk_pid975200
00:36:20.865 Removing: /var/run/dpdk/spdk_pid975726
00:36:20.865 Removing: /var/run/dpdk/spdk_pid976187
00:36:20.865 Removing: /var/run/dpdk/spdk_pid976675
00:36:20.865 Removing: /var/run/dpdk/spdk_pid977115
00:36:20.865 Removing: /var/run/dpdk/spdk_pid977525
00:36:20.865 Removing: /var/run/dpdk/spdk_pid979899
00:36:20.865 Removing: /var/run/dpdk/spdk_pid980157
00:36:20.865 Removing: /var/run/dpdk/spdk_pid983940
00:36:20.865 Removing: /var/run/dpdk/spdk_pid983997
00:36:20.865 Removing: /var/run/dpdk/spdk_pid985722
00:36:20.865 Removing: /var/run/dpdk/spdk_pid990629
00:36:20.865 Removing: /var/run/dpdk/spdk_pid990634
00:36:20.865 Removing: /var/run/dpdk/spdk_pid993519
00:36:20.865 Removing: /var/run/dpdk/spdk_pid994800
00:36:20.865 Removing: /var/run/dpdk/spdk_pid996195
00:36:20.865 Removing: /var/run/dpdk/spdk_pid997102
00:36:20.865 Removing: /var/run/dpdk/spdk_pid999076
00:36:20.865 Removing: /var/run/dpdk/spdk_pid999835
00:36:20.865 Clean
00:36:20.865 04:19:36 -- common/autotest_common.sh@1451 -- # return 0
00:36:20.865 04:19:36 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:36:20.865 04:19:36 -- common/autotest_common.sh@730 -- # xtrace_disable
00:36:20.865 04:19:36 -- common/autotest_common.sh@10 -- # set +x
00:36:20.865 04:19:36 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:36:20.865 04:19:36 -- common/autotest_common.sh@730 -- # xtrace_disable
00:36:20.865 04:19:36 -- common/autotest_common.sh@10 -- # set +x
00:36:20.865 04:19:36 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:20.865 04:19:36 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:36:20.865 04:19:36 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:36:20.865 04:19:36 -- spdk/autotest.sh@395 -- # hash lcov
00:36:20.865 04:19:36 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:36:20.865 04:19:36 -- spdk/autotest.sh@397 -- # hostname
00:36:20.866 04:19:36 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:36:21.123 geninfo: WARNING: invalid characters removed from testname!
00:36:53.214 04:20:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:53.214 04:20:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:55.739 04:20:10 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:58.261 04:20:13 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:01.530 04:20:16 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:04.051 04:20:19 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:07.326 04:20:22 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:07.326 04:20:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:07.326 04:20:22 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:37:07.326 04:20:22 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:07.326 04:20:22 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:07.327 04:20:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:07.327 04:20:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:07.327 04:20:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:07.327 04:20:22 -- paths/export.sh@5 -- $ export PATH
00:37:07.327 04:20:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:07.327 04:20:22 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:37:07.327 04:20:22 -- common/autobuild_common.sh@447 -- $ date +%s
00:37:07.327 04:20:22 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721874022.XXXXXX
00:37:07.327 04:20:22 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721874022.D6KXUV
00:37:07.327 04:20:22 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:37:07.327 04:20:22 -- common/autobuild_common.sh@453 -- $ '[' -n main ']'
00:37:07.327 04:20:22 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:37:07.327 04:20:22 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:37:07.327 04:20:22 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:37:07.327 04:20:22 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:37:07.327 04:20:22 -- common/autobuild_common.sh@463 -- $ get_config_params
00:37:07.327 04:20:22 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:37:07.327 04:20:22 -- common/autotest_common.sh@10 -- $ set +x
00:37:07.327 04:20:22 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:37:07.327 04:20:22 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:37:07.327 04:20:22 -- pm/common@17 -- $ local monitor
00:37:07.327 04:20:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:07.327 04:20:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:07.327 04:20:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:07.327 04:20:22 -- pm/common@21 -- $ date +%s
00:37:07.327 04:20:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:07.327 04:20:22 -- pm/common@21 -- $ date +%s
00:37:07.327 04:20:22 -- pm/common@25 -- $ sleep 1
00:37:07.327 04:20:22 -- pm/common@21 -- $ date +%s
00:37:07.327 04:20:22 -- pm/common@21 -- $ date +%s
00:37:07.327 04:20:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721874022
00:37:07.327 04:20:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721874022
00:37:07.327 04:20:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721874022
00:37:07.327 04:20:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721874022
00:37:07.327 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721874022_collect-vmstat.pm.log
00:37:07.327 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721874022_collect-cpu-load.pm.log
00:37:07.327 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721874022_collect-cpu-temp.pm.log
00:37:07.327 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721874022_collect-bmc-pm.bmc.pm.log
00:37:07.892 04:20:23 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:37:07.892 04:20:23 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:37:07.892 04:20:23 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:07.892 04:20:23 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:37:07.892 04:20:23 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:37:07.892 04:20:23 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:37:07.892 04:20:23 -- spdk/autopackage.sh@19 -- $ timing_finish
00:37:07.892 04:20:23 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:07.892 04:20:23 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:37:07.892 04:20:23 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:08.157 04:20:23 -- spdk/autopackage.sh@20 -- $ exit 0
00:37:08.157 04:20:23 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:37:08.157 04:20:23 -- pm/common@29 -- $ signal_monitor_resources TERM
00:37:08.157 04:20:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:37:08.157 04:20:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:08.157 04:20:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:37:08.157 04:20:23 -- pm/common@44 -- $ pid=1023503
00:37:08.157 04:20:23 -- pm/common@50 -- $ kill -TERM 1023503
00:37:08.157 04:20:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:08.157 04:20:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:37:08.157 04:20:23 -- pm/common@44 -- $ pid=1023505
00:37:08.157 04:20:23 -- pm/common@50 -- $ kill -TERM 1023505
00:37:08.157 04:20:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:08.157 04:20:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:37:08.157 04:20:23 -- pm/common@44 -- $ pid=1023506
00:37:08.157 04:20:23 -- pm/common@50 -- $ kill -TERM 1023506
00:37:08.157 04:20:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:08.157 04:20:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:37:08.157 04:20:23 -- pm/common@44 -- $ pid=1023537
00:37:08.157 04:20:23 -- pm/common@50 -- $ sudo -E kill -TERM 1023537
00:37:08.157 + [[ -n 594813 ]]
00:37:08.157 + sudo kill 594813
00:37:08.217 [Pipeline] }
00:37:08.230 [Pipeline] // stage
00:37:08.233 [Pipeline] }
00:37:08.243 [Pipeline] // timeout
00:37:08.246 [Pipeline] }
00:37:08.256 [Pipeline] // catchError
00:37:08.259 [Pipeline] }
00:37:08.271 [Pipeline] // wrap
00:37:08.275 [Pipeline] }
00:37:08.284 [Pipeline] // catchError
00:37:08.289 [Pipeline] stage
00:37:08.290 [Pipeline] { (Epilogue)
00:37:08.298 [Pipeline] catchError
00:37:08.299 [Pipeline] {
00:37:08.307 [Pipeline] echo
00:37:08.308 Cleanup processes
00:37:08.311 [Pipeline] sh
00:37:08.590 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:08.590 1023638 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:37:08.590 1023770 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:08.602 [Pipeline] sh
00:37:08.885 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:08.885 ++ grep -v 'sudo pgrep'
00:37:08.885 ++ awk '{print $1}'
00:37:08.885 + sudo kill -9 1023638
00:37:08.896 [Pipeline] sh
00:37:09.176 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:19.153 [Pipeline] sh
00:37:19.438 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:19.438 Artifacts sizes are good
00:37:19.453 [Pipeline] archiveArtifacts
00:37:19.459 Archiving artifacts
00:37:19.701 [Pipeline] sh
00:37:19.984 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:37:20.000 [Pipeline] cleanWs
00:37:20.009 [WS-CLEANUP] Deleting project workspace...
00:37:20.010 [WS-CLEANUP] Deferred wipeout is used...
00:37:20.017 [WS-CLEANUP] done
00:37:20.018 [Pipeline] }
00:37:20.038 [Pipeline] // catchError
00:37:20.050 [Pipeline] sh
00:37:20.329 + logger -p user.info -t JENKINS-CI
00:37:20.338 [Pipeline] }
00:37:20.353 [Pipeline] // stage
00:37:20.358 [Pipeline] }
00:37:20.375 [Pipeline] // node
00:37:20.380 [Pipeline] End of Pipeline
00:37:20.437 Finished: SUCCESS